[Binary archive content not reproducible as text: this section is a tar archive of Zuul CI output containing the members var/home/core/zuul-output/, var/home/core/zuul-output/logs/, and the gzip-compressed log var/home/core/zuul-output/logs/kubelet.log.gz. The compressed log data cannot be recovered from this rendering; only the archive member names above are preserved.]
^Eo7qy7|D]x}qz۫ooq_2|[or7#^|)1 cu3k^T tڰiTgTpEP.3XV"";%MB U,T1b-QoGmd^/*\PN:е蕁ͶCZgBZP.n'q]) <+f GE\$rKekA){7bR>;Z)0SH$Y"r+ FRdYH [PMEOq.s 1$I[UI$gSG1'ƿL+MxbOdo] ]R+Q[iWchq u2=a\1ټ vL->lQKftm=rw]l]_Ea֟9`ڴBE,-_=) /n,+5@Jv/;TrY%I}%@1k+'=p еZ~_hoi)&Q-VC./c.Dh)AV1BΐK 3D&˃s91RHV'SXV:v#gϐF7v(m.“۽!up<{kg]DŽN-Ͽ'sݙSϙ&mv]xzZxqVDq_"6Xr#~SY=}.vptytՌ>=.`cݷf=!pd5ݶOf^ =j3/kw|ۓϼ#/mQzꎉM;ѵޒ7gφyZS9/WstMPX_:y\ٕ*MaH{jBͫ)J~%}3Jw%}Ò'ǏZՅLd"U r@唐Ȩ9eýĕ4' Zᑽ$L9e#- =`,7EƫPԛyP3ۛ$R\sf|a}-2?qr]7IB_s:SͱdKs"qqjh> Yx!muu$SD0VN:[%&9MTSz)e5dBTBKk҇\UFl!LDH<a U[k4O6,m@a;kߓVs4i!3_V_Jtٖ;|$xa՗ѷ?\$GS}T"gP)JߢCx&Oi@gZf(A ӹ DOu>AQ=l jDNʇ rY@C" 9ZHΔ8Br>M7k)X4nƏ1I^ mIeBUZ.Jw1]_]׷?ݤ׏75CP{K^{ό܈m-2{MOlYW(>'}DR3,L1ATQmʀSzd@mQ5eB%Io!C"p(7qɍT9K[D.[hw¼j4 172--Q_U"CQG5B+㬥V{7pYPy- 2Ӟ1()1!qPDo:$XLJt *J TiXݚq;J9.,BJ|.oǓ7sz%!5'G,(i))wH< Rp1F'1βJ{TJl =blbE0ڔ.T aɸ];ڴ֦֒p![M!)B3!D!E GDͭ$E P&ZEwI* IZȈ @+Q9"3h/:: ?@ 2@<Qև٭k~hbqFd5"4bZn sd{g5DEO K8tTTxslu Ո(Jx-hڤ4A qy#jhI39h*|WX##gBY.YKՋ^^EÕǫ> v, ɃA"uxfR5c+(TɓDŽ-qp8I-AJkVQ_m!nWt39„,g/%֏gU*4y]v'՟2_N mroJF2*:<udTg{+,}V٩iC wo]D{};6=T?~RK:H;Op++J: Z ɕPʁcI)O#åvBCnxd&p(J䔉Gq˘FR-ٻ6$W?M]VFFe.w3F#OkR,dQĢHJiD"2(ġ#g׬C&&YY) e2*LO:-luy嫼4_b?xKDՏ6$Q^~j'ڤl  ' safX!قƠR6 rD xɊ$d)^d$w8\dg'MNE1Ũ0<*a#X,R̒c$ȞK;8%fP3͋H97LI'm6 d,Trf"t(؅X#gC9g f9w0*Qc-;Y2%FA;PТT"@Oc)Ya#kwQ@Ґ F$FL"c2 jūq󓚟Þ.0.(aLN+RHѨ&$pq,Eu8Q5~{0xTw]YNYo+|?L`"^$bHJ2)!a]^ 0T%gJ&mEDO_>ď]5JM& 0>Jd#&~z9l|^?)_'qSu@V+m,™AY YvBvցjvBa1[8'Klr hTDT:haBnև۷(RnR0ɬǽ[Ij8Nkk4Z4K7b•(oe_'Vozqu> UFO_cŬitvcPWOɇ\}`Of* vsvi֑F;45rNοq\ef}2>~U4`7'I)pw暭mWFAn>xeߪ( hDOc}=bunj= D!rY8X crJ,M_^Gs e\ ߝ:sgg]XC?s{/_͝,s5V97]:PT!`϶F+lNg3nľY1<_06՗Olӻl+ ym9[Y.oѹI\mGI6NȜX,l;2Ҭ0g7Ilrc5s }u)˼O%& |bnXGWS%,Q)SR&˨[8upa%$A1AգOq5Ԗ೧pDaNIDˋqb'=1,;BJU)XkLiTT``Sn)P gWC`)4J缵P]wA"A K)z~RP2jvfT ̡r\GLIEu`בsoZ8li Ξɹ?؟C .q;o/Ze?H՝5}g:6O>m*}X4m^ўA S@}O5@>9J]UjT(%+" ]1' XUVCWWLV+&؉QW\8uUu RIՋTWw 7"U߿N/'_̙sIrg_̇ezrEAsF'Q):t>'O;O~W2xup1QS6":jMѹ1j DKX򐓉,T,L!)&𪬬b>~fs^=V^KaZXPhiuC)Wa[sDb+yrE`+[:Nz':| U甙W'+eo_`TF}⿍Dc*@,q){V_b1h|+F}4fr:{|A ?|SƦ7& =;4Z奭t ^ki-;ڷTjR[}Ko06@11UNQiUrKiYv]Q/-Toe/Lk oRaoIUZw!OC )L독$կ\(rt5P?B]*bLT;ι/ڦ3)Hք64 `Cʾ`2I$& F񁞋.?B "XxnwL/ݝtatNbsgOEɿЧ٤>.N gxP /KL2QB#Q RR/W š#JaDihs6n)=("sgJGV-EÚ',1: EP2lM4CuLL."A9b%cP*NZyrPSDYj8V`lhsdg;gyʠ>_Vgtk*#>Mw|oQgy1"YIYU*M 9h ia4Z4ś{Pͳұdt I*E"OX,.0hLPہJkZoW90n>]l\ۃO^ۚ^ߛ]NjFԃ ^Wr)TRiE{*JF\{(}0@W.Qv=v4iTk|MS7SOθddC(+-0AA3:#ivj% %yf\aCd‡Ca^*o?>5ҚGe)LK k^YգE/Eڨl6F Rh"sdݘ.kC.xkoͿُcܺ&cBY<]h!oϬ(+~7|& 7[%n3ޠ)/ƾd#pL>g .~!E>HəXCTuoz8ަ㰒[J6ŀW?=isGve_bzȾVd]|5 `Lo $!iX%4z^w12  ^KHSLGjr*)Zd2Ϋ@"mv6quhD .hH7qɍTyĄJDbTG R\˕a1<9ɕh>X"8qvǻ"NΛB?mroc3Ss9[ J=\NpR+CN5p Sli i^ƘeUG{+cb!끐阬gA4) !*T Iwk)O aƁpZkXEϯcx}HQۧ]¢on8#slh2$:%Mt\JY F) '1ڗYmpgmr2%f6161ԟEe  ڔnP ۏGq͸<] ;ڴ0צ=5؝5C I\Tp9[oV 7ݒTo$h!fHZсF^f^ y~d4A;Wg??lifǡ戬=GiyTg$ۣ!z)b~TX‘%$A'ewI _mP`(PTs0!%̈́.1DaX 7 3qEa{ţfY3ObOуCu>գZj/ݛQ^}xQ+\x"uխxTj+'d%ں5)9q~}ߑGEKgfZ9"*7hd@! 3N$0IRtL1Idߠ!|& sžn4[5KЭ.C͢MEv:x^& N^"4D p's@59 q8bR`LxgE:qgx>LGfLU ߽8)xt<-7>Y;3(7ݮ-',@_ܲDu۬6#FECt/jFQZe TKϷیgQq~\?X,anho&dyi"3 RZ♤"OV34<@#ZBP˲و:([QQ9J^-! u6s4h("CУW n`5ʗABȷm k_ŭ_MlSD,EBƙ.Zo#/7OjNd% $HC$!t ^FБ7,c)84 K1M$[ran1 $퐌"d`2HJ9dY%ҙ6Io2<:g5IԦl#q8C}""5"gjH0rK8Jgى@(8 Y1p:YCW|d,nQsGN 1; \Fb@ uG1hqx<{I$ ;&CJbXg%j n- Ec`^0C$u2T-UbV{;V{E-@m!F h@@IyAK:*p"ՂJViͼC@bRm%>|O}Rژh"֊<($P6r$bH$JYÑZS@#hhJpolg<TDjd>)ZKgc~㋳-r r\9EE5E1BN2/=Aߓ`%}kt=4_AW\R^W10i-y BEI*HBCLZm`q/`!Bƞp-bSBA 4<aA6DS*guEXOQDS';;T%u0TKNGsX*ȕѠ,=!%'kIr(fg'q^rxp|(7d7 5Jm}\QB9arz甇S7^(S.OahųFp 5@ ܐy xk׋s{FzUEY [E/5|lڳ|'̇XF-iQ^$c:Ͽ UM݋gojh5lⲫ6cM?f S ez&WD5`8/K7 ̶ۃ2PoV<m(DDom>vWVBͱ nwEQgO~l*._<Ӫ-ߙ_MG /_WO;fԪiX]5u2C<_O<$|ҥ w1B0g/eBQϧa*:1`$VKQc`pV![8 e> ! 
-OOu5P7@ :-Qy "$j4r"G{fdݲ(כ,U T7$/޿Q5qYy<{NiUլsݻFȶoh~ʰoryqtU_-9>OUM~Pkc{P[, b1:|" hTH0^ m9ΪںjFu)&CڈHo}jUuZVeղv]tQoSօmi=e<33j7}i-#t"bŌb_{YMs^X~*ZÃ:7za2ywQΠyDI`( 'tV0tnocz#RBTheS@.%eKuI뤭M$Z20^&<p Sۉ.ibdyz]k#6Ř?/(Հn,{T*5/(yh,ED@pNTs4=v^FQu.v9ݟ 4Vrfۏ)AlJP)ue35*9c)1C7Bn@ %Չ.ygҫ\ηX !֋A?6nBz!O] ;A[hbkˊo?!eokݬ?ѻLzl&đ5}]7O20[notw͚xDʵ_3MzG> 5_wyZSƪe1:8ܡiظfO) 1R2T_.e OP>KL0Qy"y< u_*,q~G=OO_UѷA]Ċksrfm5LA+xHv^ ō ff{ Yf)?qmw\e_/IJVyem%EC^so;:qp85}Vx4nQQ9AHGk""[{md>5zߛ}^E|.N )l_?@@]fx,Q#lX-x9gDި]a?OzibtjC2)_]: T QO峿PaOP諸h$̧gOZz2R\| <$ZA'DnΫ_LW;< TS弊ȘMjߢpY 'ĊAUȓy({+m:õ('_*h =!7x>)T' pZBfYy79d\w%˵rBP2[ .+$ЄlgptyơN&Qv=[ KMx:N o!a8U^|1r;m#i_z?;˿oF{uztQp=$ߜl> *NwW )+Lp7"2I_t1Ғ|fmkR=:ߩ3J'@.bY{[枽&9枷))|zgy+5^M.Čer:04ů"導L9lt|3'dlKfudf2 /}}gTN}Qw1ZKe0YKgk(k/qVjG:ۃ ?t,qPc>rt2*g6_RrhE:Y\^]lᝉαҲ۝8(S9ahuu*;;D=6'GqzDaa5l,[s~E ӶhWp%>8Ѡ|C>U Y~|Q6p~ycߊ>?g|3q 0z履cKC%m\05Y:ˑQQ6݅DP2Bryǿ:B `<ć]Ek />h}I[Za8>s99"uW#3f`rLq]5LT/UuztP:Q5fOp!bJU֧.nJܽn@G}GadA#-~Zhn\}" TDHz` ;t*dՂ6'C)Miym &FÏ d>ޭ-5k|Hyɚ &ehqjJ1&Zj!2:nZ;B4bv[\oL. B%_A3Lx Q#4G@R+y6lKA2 IWDIS]Q4 !ƨ&ރ%v;٢Z%Äo50%Hû);R3B0Ƒ֑5~7 ҋJcF(K}S\.@5#ZU!60\mFOAjTGx"GfҼXL)&>*𡜢\3 zX ;dgEʥ:ju&,))D#;*D{j$}m.5SGޖݏV#_f F U4c˲6S>#(3tk@AGŘI܄Bl JH nKࠤ`gWT7?#3Ɂ'bU5T3[(HqB dA$>y/(nM#P܎m@}K U'eS Y_}^nqgUc5YKaVA;>n qvr?-. O! =`Yg>bM=A#%DF$rm A/&9ІHTʗ蠻%_>gmET2{ExhMf AnK)A2v6%)%ܱ",tkT([|`cƲ&$˽eg:|lyB "25;9vǍEIAd&&ULfWbeUqvqPD쬮pjX(&a!d*)njpF̪+ANX  sL{%!ԳpgNjd eVoPʵr'+ުj"J>#M*w(aU&lf{L0$Y+ S`V6mzqn|x<4}|(deoj` vMr43IҢG* oIlfhmpk*ZI !e횠@N`I{F7MJ3bO:V 2A z آsE6* r#bs ͤYf1MA:h A@%fdi  Pkz"+PA>c"h$G\k'NnQ!"O:|AWtX[(m@ <E!&C$:)q$,mfyD'cyb!w\@]pJ{^ ~v7]-k gس"C, !: ]r_}み*4rrjQcj>\f;1Cy>?_[4R#SOc+~pn\`'Kmprq8(\&RiͶCFYht $sk;_oĭ1}es_Vy/bm], YBm2(wMd ~ۻ_>=E}a'%.<`8`3> ߧ6ކ/.Wg˥ߟ-'?>=̉ ǿ|{U&M5]›/R?JJM{REi?7kyxD[pjPj~:v:\=@hrM@ޕ@kz.Kz.Kz.Kz.Kz.Kz.Kz.Kz.Kz.Kz.Kj hRPzJ`z07K X[/+\5@r`6@>ۇ_QOOPX1٢[* (~:EJ"2\igNON]5 w¾wkzl[Ik2}%XS眴)ʖSU& ]sleӬ,דtʆhceS,F|2s--?{ȱ\ q?%Gc $@A7^HC,z8H/IMZa{mMgN=O']MK?}&mߝ3˯}GLr[MYlΎtnԽ>~t om+ȽKk'[5SWSY\bݭ0buo M#z^Vym~yM&{[^=Rz?Bm '?0qtv?1ģ.yC|?ۧ1昷$+G3 Uptjst0cFbaaaaaaaaaaaaaaaaaaaaaa}> eX chy$R MHN4:% @z 󡐰>~^B/ܜˡ/\J+J@4G̰$ AL"q5 "OBIJ[JϩTz6Tܬ2̰C>布f^Ux5Yuknr˃;}W}ڧ\@$ZYh|GGeiNćB!& Fji`,nG=X&LgjIKUmq.p}"tC~ IO]-Na z.QۥG=i HG"qHga~Y!-9YX%H:IgV",",",",",",",",",",",",",",",",",",",",",",",~٧! 5:/Rj7Jiüƌγ]HM&]#tK+m2M:-Dtw&jY, UEn*뚲#N< $I?aJ ɮB)D@G{?(o嚎 \ysVe\ILb=8Px}ѶRFA@U{Hu%TE1[KSLSime.? 
T{oZ#\S1)$+{{GgY]fpa`X ~iKi8nSm^KV!SD_Lk{vv?Oz[h 2oMd쟩\iA[N7"0¤qx!f<s~ф3GH6tTگ.?P]f| JfjF;EZ*t]r%ރ}л ҳS_Of}:ɳ/ΜH=Aiتr.P.\ۦ?hSn.6F/ L@ $yJ'SnDi^"XGꥩ=ڿn>-R.;[Gu T>p~YŊ̉|{,<3m>fRrK JRnXPNN=/Z2e$I==t|DD#}*8D8Ra>a=ѧ*H>z-,U |>$<0˟->_t.f>rŸ% 7RXGbcM09l!]cJs[rd5ɑK5_sv zp;T/}.'zwf%=OM~#kWl #PE@B_(vçA :sWX9 o߶ ;8n#*kߖ<|U ]3\X|QMmAhvY\'؝j`|en4]/% 7#U,qvDz[hJ1|2;@*˫4o(.mfp}#Uu17Bqԏ >.\eDYA#~f t;[DZ%ۓ!f4n@WY+KȀi ɸƗS` 8 kЖ:뎽4^r dFxɂPS02j.e hT7E&X&d0睗s&A:ꚕ0Tm;n(58Hn^X^b}u`gpϛjnr5unt 1?:7NFi :XYl6Fz4AH\oJ9B($ PIb/m2+"y *'T>|:#}LKJhiM \QYQ:3"B@HZyz*q?@j76~4ȱ 1ſ/JώesI~mYuFY2v\ck 5)d!74 !YMNAU>gΎJAl@MQAx#>pa {7߅"T"ܥ,fkPWV[]>$J>V!=ϣvAXoiۋ;ٺ?쏛=g);&Yuxdڐ&7>֟6g+ؔ`CiDrFQ4WAطoo+nVhv4%X˭ew.X@b/jJ_U[a6q69'RΕ>v};ځ My#7B'FZEykײoZFt(&6z=Y,ȫ;BDlrzA# 1k'Ⱥ4;"aD2Gq` F|VI@s6gnB:{, :gڑ^ ,jS-܌3$wpotYS2kQJX] `i:JApa  x-3hytb>qCjxvnbŘv<'^ӒcNjiAOAYQ0Mðq{`1{ҥLgェmG/)3xA80)o?ckbbnF~ύa7ųe.\#e5˗vEK~ I8]'#pY΍g*u,ss)'Nll_x{6(>9}p_{;GRR23CRR> XPr:S8f,wVzU'>gW/޷-PffsH]F{E2oj?Ev'NJ0Q{y5qv Jѳۯow@V2/^ro!/ҺL;8h 8H )/EuW AzTܳfa|i_,AcJYY9-F6b\t"[F&cYڷw H5cKI%Ee%bT5Qn;y3lvUh|9de7G'(+]WL򑔥G  Ox3AZN[7+aj%[[T#\5{2p̽8rj"&zpD>+1تW\x2pլuapլ 4KT۫ڻٿ}]ʭ,?_9tY ##+#*!!CKgI>EPZF!V*Ĥ}%St`S bdz*l\x}o:k)O-lh/O \wl7VQCo\`17N/9Tw}//Dpfv!i鲃aX5)" z M*Di4Ss9GO;QTrIWnQcYqd]!$0CZe[Dl}K8.YUb㧡ZZ@%u T!F XL# B_!3I>kSbrN>yWl0UחF.ӎmKKoёfD5ccmcwO<'c aNRy: F99UC%rFk mK->x?UEPn?-)(N`!G Ύ͜Z޶*\hi| *89Vԩs \_|[S A'ZR2,e[dz ZbKКuEF" ƨȱL S%rkd27\z#c7s#c? c֫]Մ'ϤɁnx22ɻ~ŧEZwhNb3Gl XCOVgmMJ:;k8vD/b+=lb 6$mFh;E'L*ʙze~Ď|KbcQ[wFm=`w.!QZsJUckX(Lz@+1"(mCefc#HN[Q LA,.^ %.$ ?͜x,cAnXDΈ"NSE3jz4KYȯY۠@B,^(Geud#l+"+[ Gm2Z gblk4R%  dIvFn]W1uv%팋f wV^ &z:UJ!oX I)ڄ.ǂFX'=Vs'I4eIpS#e?0?;A6<7g.fB4HR0*CTB"64ZߞFW>RE$MSʷΚ?V*^\r:jfոJUwSuSTw~&Z坥V[=!偯ޭkdMFs+eQgζ^]pщllZ:eŊjD"/fik..`K,:5"K"ĩjRc(wgnrOPޭ)V sc)K/z,uVK+Ks{fr_xO2ˤWS~z|'^ʣ8K F}rK_K/ nZWJҷ+e^,$ϖWHKެ$۫߭u|l%iOEp1^bCק~嶟Zz7wbrÝtĆ5}^o~^}ퟖgYeEmVw痋?Z0\Ykcz&ްu׿?xiv=1Uxo5Ww'u}!l3ܬj3b 2ť{a+c`VqBUQduak$V9j0t O㙒M٤4WUlEP8XP[:i%#Ze`[mч:.[4<2OwWׅF_ٳp5v[U۷R= NJq*:qo1±hgPNBs! %=؉L1)7 ZE62"Yi(cE0BLq 9譎Fk۵h*uO==f6}myKg//o;M[uh b jxokV~}g{wWrr~+:sxha_=m=suK\ /n}&, ߜ?K! Z쿿Xue}ݥr#is |wm.ޜaD*WYE_֌C`|P`4hiV#/`̚U3Ҭ捡h#mپ*}:ȖdtQ\ A#NJQ9@L:*JFYy|J>׷DˆykT)!AbjckWTөw]7s(D(:|&Gފr^-\zSշ|W̮i-vvMbM-˻/&ś ǩS&ՠ؁̠)\0 j#Ie2PXOsa5x~P:qo _8}`8GF"aRh2EP$äDdL- T3OCF.˜A#D4Yc!(vἏ%tlE1C8:^5L'^Tև12뽊?̀w*] t}.ZUˊ$R IF(R-lJtzGT k."²]_vMT';6UNU8/:ȒE%j'D 0 iHmHE_I dhr߶<35kU# yuC} .O}v֔HԄUBlrdpZΓMV\5+_'As1QKI6*x^D*v=$t"~CʖXG6BA:lD&)J\ t-TI`D8 2(}R)nT6PH]@LqѴO!Va7/NL˭+k^t(QɎiO&y"7* Qr&2;JO/| ]8#wyum mVFH !bAࡐY cȞ ![m اBcPgyNqd2-cݩf+DLԠQُiRC)VK k0RVv Ͷ#'M b=:R9Q%]̘K%G^G4eZIo Bɇc~ʘ~P#,K:PǝH~F?rS '0?<}V8ʿfyRlZVꤟV-.#//6ӧGUf>UaRGۏ洨~%/U`ubWla@uk45$ӆL%nj5͗ ilo>0v~BsƤ߁ֺgoIF'TNl=[0Y;,`52g7&3&^r̅5_-=L^o< b;8q I ٜs_ 1g.9Q L2I2TFdM5޾[P FEl\*SE>LƓFYXQӜќtN½z}ɮY++Sidj;7S,YTmx}΍s㓋Ď#^&f"Gm,^jѺL`B,}g$S2?ML \SQ! i`I*&u_>#C$g hMLȑ%ax'.AlӥHE''59B]&u_Aj;۾&M^-b[%6+@D[Э ƳՏvnoRy7$jw(2q4n-w5LjP^439a$k=f/4R@l9 }ƒ)+ϥbx咤ZSY$MtQRvȠaT < [gq43qv.pGMlV>9$, qo )u{>uOJ&0 2Ibz nvGKd⪣j{Fo3zpu=Oji4(i f鄿fr,k\Lk{_0D?o Oc\ԵԢ|̨֟c~;//eDc@r&^dr_5r.f%۶Ye5D(+_wJywb Zx<< X<*U}hGn<qDCaWX}#50p<)\3zKfg\V*.UʾUaWY׃5tGWU7>|\Ui;\U)7\}!#+X+W*;UJpeX` x4pUŵXJgWUJ78\;ԱUPJI8+Nc]U  \Uq8UZ}JWH*+) l]Uq$2TiJ9Wݞz'ڨ EAJ efyUE5!*Z/_q[$O|v?C%r1 K do ٙqP5o]Cռ] UtzYBCpN;9z@$+UQI%;jq. hSE$ KZx4SԪRQu\53q{U;eÕ>CEI/Mzkp^_.0S_n*C#r%0F.A2z5i/DkSIѠ:Lj]VRxLt4mqӇ~sO0x:ȷXbۡcF虵F܎;ᆮ4- 1\Ljm.1% DNLnW˒-Svqx]O` OYҽ'bVrӢ`HKdž,rQi*)do x"N96F*D1&T~ H[X2HGD*K3q戌-Q<.# 2fMr5RliAM%o%? 
&|bQLJwBgؤXWt$-@4`k|dFㄗȦ9t}&n:b2w7 '@cR{E!C.{]!Jdm-ygNXJf~QuJ)B0rђBʾNE*ʃʞt/uL;irjU𱹼]TZ_c29gոP_~29]5޷зitz_]MnKFXUIŘfk4C*%^-;0r"Gz@ B093jC"1z  XA( M4(J6cj;٫x3[6|k:f㗧MurqV/Xe[[$w fh'X #t6 Px,ђ &т,؞$x._=[g@<2Cd$*c ЊtJdjL7dW{pɨV>%YsפwMBm! Tٝ#[} ˺NEc*hP`XťғgZM3Q f`3XSѷzZl]Gġz)ڳ.@rKͬrsW>b))j4d-8]| 2F+R $ƻ;{@y{NZ12-ϰnOHkOl1tŘ)/(kL4mD@Ik|CV'~OQFd@ЫY3WOxWN7L֥ve\gQ_f>Ou {U}>E(] t (zy }8X' b7knuqC6a<ЉRvM̞%->.ivKbH}T@9mdru}6HBL-Lj(Y s{WFOw ss,6dD<< pˎe) +IHb[lvwlQA0ڒ@Tt #C)c`1ԅXFW3oQ}J9pljۈLs I[) H9 +z]L,Y:(ZGTQ}p-be)֢ 2+muFhsdsRcEm6J캛1,Y]OY+z*Qj3996JR'.SZ~}L."ڃC]`G#ZwC?L^(dQj J̺NE>ĒI~OaO6S8{&lϦFG~_p㲌/v<n~|Gkj07lWۼj% {gwI+ ՠ 鬶3B0QGr- 'o'L N R&SqUvgK *2BPVU͕ >bbqv`w~ v篤R%_~*`IeZ!~|'tgw)wxx>d_W~Ƞ%CfCZx' (l)_; r8e" -1DӒ%6}B>a`IFT({l0$z tk8+t(;*8n}mBP>S,#qY>Sd@<\MHyy*4pυfOdMשP7ա]dՉFUC4p4yXaVF5L9HY̱Rދ UEY`P‚m$GZPM޷B,|yK9#YfHp' {g>?ރ>A)K)Ĭcгq#~zBo2^?/[T9Z흯_NYoE-l2_J9_n?~-a5Qm)=s7f῟^6MgAV>9S32y~m@1ն+V&F^uQtcz)3aaXs85 RԽ&($JBm8֫*,dGض@/}y)}r~ Wnv+/|n'Ԗ]+8r?{0?iT*D~&{;\g7>zyȷ'_uP v>{%AUj=޷OLu5.'_XaCvR&Ug)##!<:R0cs\>w<Q〜tdԹ@(-zwTt1p1+OV̪u8ȹN4<2#_sTM%*(2yFEe)! ]*&ћ,T.%mK-|C+Қ R (y }*:%[5Z4hlCO&~X2_. |Lp[_G3]s}S.L۴R_> GhHE*͡"e"ZQY]B/%AeDxn3jM~'iLDU>X֛bD,:B"4HcAA" *a;&}2K6M L7G;؀IȱڡkXw)]ʦ>䙫6˥;KNz Q N C35纘~_HKMItFZAβS/\ܲ:z(@i2W)O_ Pvt\eY-$O$ɑAJ$ k6JH,J>bz/H*%8 <+cd" QkK.sYz rkNf9pƚRU~ Ӌ;^˝G%JTm` cC0>EBA('}/أs[ti|=;v q.').I EAk!+9mxS>V{b -wt<DvI9IoYA^ Qd2X%$\}+5$ \A w]Xg} \,JLu'6ڛR݅:(R[E:q]ХHT*/P NޖId*qHމQ [7f̺!w輧؜?B=Lxd#G]95e^8yݾEߍ4.bR@px;ή< W隫|3nܒs6Ŗ/juAtT (R^^Т#jQ38Ap`.XrRG}8AA|˅Uk@Yڋt]xmg; gBZNy'c:b(QŢEJ{dLm@Bg]0k%736|qw/Ľ/B &Zښ%T$$1mH%#XzI8Br"ߓgEZzrF?f5gn]TIŸt73utL(`^`v1Qˏe.Ȁ& E~.$[ʎ$C%tF芲*&0*'`}6%d%mS5.I:B6XK!뒇LTLԇ$vC>wф߳ZZcxw$VmLPuz[7^/wR 0A *Iiio2d枦 W#u'DbOnLȠNR)8F(yɏ)_˸Z9gilvy >,'ӛw=%J?˳OO~xC.EhQ,~Bn%O{5'_m30|6g%].2?ݺZoth5PNd>*=7];z=嘗 VʹNz!6vK-J zƲ{ *|^SG5sIMփF&HzpbV|n_bGA)B2QEE0l+Tld`ڦd3?,WGN1 fcqxYh(RJR>WPy8yr\e[ʗ'|3TS*f4 T=TbAS4pV%:]* ֠]v5w8V g? *Qj V'5g[!Y3Guy2e P yԂrQZƖeAvw8W]) B8 l(i44|/$j \yh啛5'q=9YdP`G11E d7 hjQT}z v?A }Ȫtd{(&-B``2F5Jg1Nl{Ԥ1fI *0bĦ g h$6h818L|'L!UVco9uByrY%h4# Ձ@U\mqy_ &"U>5m(wŽ!T&LjM@ :j̸)VV*+1M(ec8؀M^$ڡGL~Z4ǚd{ !ݸd]0RMSUfakk;t  W PYT{&R(wcvd%KRd[]Q.)}Fʀqs3h`s`cj0P$ { bpEDSlݳ8=L>ql>Xv#{vt\1|U-RSLC698Gb FGA@ZsVEp}2zΉ+_uKOvD^l֊6'Ͷhf5;=S:/1w,AhD 7 `ߋ}>պg+]Ge?-q0!ɛ ,o/ӶLU J LUq1Mk)kYm KPQ(w^D:+U֩BiXEzL/WVU«Llj$BW7 0v!ޝwwSMf{3 KV޷(>.Lj8>$eڤ/ӟoM묾v~qz6k3[Pstei|g(-4DžKU1eU8|\. ;!bVR˼>ʷŐ>A-N%wOgfU:ֽw]|#-t@~%j9. ~{ o"*LobQNL|2B $Z4J b]ٲ\//^ӯkS2=[6~YzY".#Wݻ{Elhè4l2C/N{׭R nԃwWe"}<{NE>Z{y&íg|;fڢv9ؤMKWz&'=E`YQܝEUbK\Y\Z%ln.q}af}+(BgD Vз"->b2r1kNvLNݯd;8{LRqmcJ;hWX6ʝ(Qw6?!v3b.#y6%t0k fl}Pu܈]nnd$8$x8ifXyolq#z%He D7?茋}'! &\GŅ.Z2LaTzRA{QN|.:F'25X} 8$$0yl"-`wASOq^y9;pPdJؗ$F#qOuwfѸAP'N :J::bhM,fGn,?N'3QM"㻣 L-D |M]FD:rq#Jao 7 ;-} YP,$4)2 ! ^>S.ӵ*C#1j;ߟʵ,| ,}]~_`vGJPϡizێ/I`.pIlqNvr 枇MgSwG56{SPJ Z3.P䲳H1.@*h,`(u؞rHWpQ2'劷UEFJɊFCC6pim%_6%\UwWF[|/_F!_w,U3>[jK9|oqjŸw"Y_ś~m۲.rP5i6N;ޣ1D]JWrfH|1يH1U)gjOZCN.33jdP[tGe䄺6Hm<&g{e#=8==:;icod^gҽ5B v8㦁S;s R"JmU5"X0rFDyW{] Vw'ķa:hk4/U{q ^+ߎ+7z>2 NaOy.K!R #tĔētα<,lc]sw*xR'n/RvĤn­ 'nS ^ Ǫ2X yܫdvs-@~fˆo;Z^B|}LtS2iV[[V$v 7kx)!)r? &2:pb\IT%/:%@z 0g6?Gӳ{(g5@%4r2;m[~N4D^?%/?kɒĶiN{s2|씼m+1/u][oG+ŴYcr 00nI)RaS=lJ$&)(QvP$էΩsYRj<ŷE1`a6lz18ϯ[?tΆ~WmXt ?a.2|"rO)]t<7S(%i20%d^7d厎m~o#~<언޹\ٽ:8 =%o%87]2~ f<<{qHq%)feoy)6d4 [FgzcI6{ҥ\F[b6_ID}bn4s)tT<Ӯl ^S8 oL"qq[*$!d{+zY}:Ns18z;?- /5ZlNiGbVrZL4`JHjLH sMlcLd[Ƙ}ܸ R)!`! 
V'rҩl(Lx g(}J6}q,r|mhD]>)f"Yb:Pz.T-lʀSz@ebme_?+՟(*E,I jf\rcUNpc|鲯{й!-|TbrrjODKg7Qao +9rIg;ؚ ^v=[ͬFyeJ\>fe@ p{V-1ˆݩ|s$HUƄ)C>!Y`RcPTqT HRpv#c9R IƁXFCxz,|'ͭ*33bs}[_4`<^G_[0ԜY PL9] wH< Rp1F"NCV`,&W#@(4A'툱!TL9ҙRK g7b$ŤPԦQ3حF8PƭMsFH)H ɭjrF+a1oS&ZEٌ5I* Ij @-k.9"3/:d  aF_zYd`<D,&""+Gjn sd}g5DEO K8BBb:*d*i9j:Ȇʖ5KDAmRKg\pވG=jLhY9/J-5=-G|q^0.{\ܚƄhM SY o͸ֹ7xk74$$=.OIǩ8 a;^*_qv ~|!GTX-ѱ ʞ3OgÍċ3V RHoyd\J* >VHMnCwF˩+bOv= 7Hf7i"*1p 3N$u0IR K\\yĘ$\R/&pb_I徜kEx{c wCW$>!G /z>XGK SMy2J'Qmc:;wXP :2{KerƁU8, H`̤TkƤG5 UJ^`Ϧy฻j׷]zsBN0^|j]hd#N~j7r=L.?&C\LB#6L`:+ #8|LqS?UYW_e?L=>+Ȏ|v9NOz.3=.z:gjG-2 : #3JяOc7?۠4jֈvulh{IV9}c[,g^w=W)̯fz\>2X v\rzmoiZ:|;oۡ")6>uqˏpTsW ;RL7qlܾd޹**I02I/.q//ؽ}2ECHE'rY|21xˌ06*yӳ&fcpnϹ_/^l&M C]~g}n李x<ƥqm恅@Atz9ߵ JR%znyDq:l3E9)IMpDȲ܊q =00B"rMi:ۨ#Fdm9*h "WQ=-vNN(2"h\}_}8tJyG.(qS%i:yeAl2+A"QDYO硇NkCEFRͤz3<#ME3ʤi=e {)<g?8b!p _9^υN!i e$@r!EwE2`0am0%Dꋭ;)Q'źN-:qdhנEJkv#{0#Y/F U_;y{b|㏇C23 xD!M.Mɍ`'7Y—C@/rC*Ùm h8Wr |:iRoy|B M0  ?$dסL~a'f83_ċҽާ{C0/7nax[tuk^>4]JRwu"1KT RIswW# v ˴7ߑAM/g{w0p|:&Xt60nâ{mS{Jig:){ȒwYje2rGGjd66BGϣY΁ۮΕݫӰѣQ7Ӯ~zt]5MJGXoU<)Gـm8IhoUh^?CA-S#u:?ZunUcI6{ҥl76Uu_]Jbq`?Rm /~c^-K>iyן@4rLR*CD^9ΠR0E+\ү# |,|i n4ya"2K@:] uZ521quSʾw`rc;KU|⹙V>%ўFmpN,+pÁ9ZK#wBK$ Nq/"eo_E5}&*CYm|ƛo vg\ck4෽~B +b7aR/j LPe~R"RBP2FdQQ}̣T2'SIrNsX+xGfBt&݀XNv⿣ʞ֔Ĭg\(%AD1 ,ͥs [$)]6 HJQk9!Jsb5 Jg U`̋ys8wtݘb}Wf|e(j_qㅶZWD&QAW =VQGk*ݳo3A[QPF}"Sc#J 3;+mH O '}0L0@؝`?l bD IY,C'S-~,|fu8!;th9bBEL(LZD@4: \p#(sRKl*\Y/'`寄P@ $VH-#K-HܸX WgTlΚRa8Y9ߞ^uѫ;5^̮6Q(}&m0Hv7FuBI{˽%fU|)N&^ǒ '-B 'lzpDVM|gJA!R8DR٦Hb2H\+$jKj/aje) Wych Q7aܬ/7ίqU!_6h`4\_V%<;!+m%b~z0%Pd{0]%<ȓV"휳Ė$"-C1`Mil$aT'ɶcgAIySrv:j gqabvEjWc6,y/{ q9dX*Ga"$TZzSŗ^3'dPz-UݜY 18YȄ "+:J$k(ًA7Ȃ:HF5Zp֨_X>DD%N-b )qLH¢!`'RexIhD$VbrƂ6rșBt n%ɒJ(Jpp>C@rqNҥjTr\Tb/wfk ^9k @^[!-:fnh I1h{r+utE=yŖZ4U\K7'dľu!t\d3XLfo?M>~ ^P5xؠʺT܆|4w  pp51fJ9_CC,LJ,'Ɖ,u**drOQ+ )vwf].^mn).gȇd }PhY=wC0,OL+6S A&lq-3`.ud\ohXxP@>#Z!t$3*GVps8+d[<8ZcK s][/dYJe«d~*ؑEmFŐŵh|J8FYF%iSle9 6 6㕬{Əܷ0VC֛u.y%4DUD jV?"'),L;YB&fuk3dfZK ,(Q)%a2)5B{K.s,վP6 4 ^ȕdHdMR|1 LGEOo6ƲLJ\Q[vF R:.1DqLH:sƀEЁqE2sBA#K΃HHӗߠuzyغXnsIQXZ-sT q*;_zG #pUfޓN K%("܏8.#"Œ]`փN* ,20ifhv/*' FN!DfM9hKd"?ǞڟK5 å r6+g)a[ AsU , *X{FHG+gYdKXg@Fsy'rTvs0gA0]38tZQq-lߛ;*K?|?1p@Ԕ'sM’$?L'W[VRx]bBixgp1UUT+.X|`rIc+q$y[x^ׅS;u\L&q&-7Kݑ@mUθK&N5RrOa>Oatu:_Nt#߬w!(b=d ׉+0~Xۇ”8^;o˟_&f%6p!)rm.ڻNY慰h;ޞ-%ChND% .ooPˈ=X~)ǜa\NoJ` ѐ>Dhs=@0J;VvBTED=M Z&b{@L:+ yV V:/&Ri@5zx8{"x"T@*K&jGH9FA:Ȩ%Ch6 E4qjNFGyrX+}Qώ=T.gI7 ݘ+gsKC8%FTTb67jCh@(RVMtNF/#1EۿqGCn.32Z"c.dYn.KL8ΉJH)##S!9<2<ˀV +a.U$'I@pa+.҃[Kx6`R(=`4)? .pW$ͧ^/?e(k/Mv=앁ʶGZ/>dN h]p֨XUXjPpa~'xĺEc# 8 I9dmBE ,ے7;r&V\Ys_JyʻRKGnΉ7sb5/9wI;|o>{BK$# ! R7YH0NJ&#(F9I͑=g6 JDPΓ?++%*)VQ{òblC%%R)l2)u8yrDD ,B ÜSҀ¡I"kJ*qD (xYϪ @>^$|\B]Rt`; \'S$Փ3#pVÊn ,Ε},KA+A}~a9{:kB 3, Sd2P3. "Б0AzU jŨz;N*&zQ0E1yH pJUM&A&@WIfL *[ hjWf;*}p % "JZR G˂eyXNPUy﹭މ.r? 
U#cWfM&2,BQztlHyIA nu'N3k٣{lƏ2wob^kv71jҴM6a_ARHCBlRلlOSҰ eb`)Ͳ}RE]BZ !><7^]]tIJnFՍ7m 5q<7] yO3Yͤ{-en&V`6T4㟗o _ood;Xօo?;9d' R(νϹJ2 s.)s %Jd?;%~+vϕu5L&"…\n{}'"\4'XǬW7'˴?3nF}-MLM* E) ( e@J}]_q~b>DyZټ,LQ|8ҫ{}vd[ibD =j8YxRR< 7XܶJ'wۖ}CNKįI27d f8 b #!s5Y\Zc]ZD㡕sC,p"ώg`^lCMH\,|R2gI!0ށ"'ר\$tsKB Fo:l;2HA5"c"e?u`ß 핫11m 6=Fx./N{~dr;j*@r >2[GoזP1ֲTeƥdUP"d.*a .$,S,LJ 8%f"sp[2 lϫV)u_#j1-FV vSNχUK{CzkIiٻ6Uu~U `@G5EHʎӧzHʒ̑FT˦1vaUTpV+,)˔rZ$ ѕLBe'SFv2QPV*PyF`A AkڍYR N׬ +BAL(RB)g]gu]ψJN% .F0rR VvEqDs觾kӻyˋJ0;m63-M->6V}a֦4hG61+2T(dpMPgyʠz6\mzzgD ytg15ZIrvt#y6-$U 9)K& 6Fi<\i^`>߼Y\0h˼ R2A2(U4 gb&Fo&wN(wzߴ١G áCOmYFQknM6ZxEX]w텉#?Ə){i2?Xg|x<[Vsm"K/6M_n 3y,V <&o'M\SUf..eUsüv1qy5??X XB^B vcȏ.GAd=FcIb;ܱ2Q]?Ͳ+{UG fl?m&?Ojy9m]dz.wK_U>fyKvv/crvݟYy^2:w6f0?v9]{Y^c>Li`V&'[?hİĊ}Cև>>n_jMW24w׀?h?x<2^YD+{H=ySC=9y,>c )iiL(~},(TlP _8J{0KɠB/-] b{8Z5q/ڿ飫`ˎ ;XI5?&ݑp,y@~*#XYJ~O[=VܑDxvҟR{Whgyzm֯黋 u8t^N p}wæ _yKk<=h n5y(V@]od7FHfష=gNhxmD?6{r{.W<&܍Y(9"H |2Td$dPpXBa 79]X?e?4{e20&l։AtT+d0ۃFlA ULbzR=QI׬1g4 OcËB9od_OV#6hAh 릚F%Ԃ?7ylqư~NC2Q'c|}v$Il(%*b Vɖ}ZfEd q=6cjyJnPCmoS[t嗶˅H L@|u7)xנ BTmu؝AE*j-0Un&»޾e7fw7]#%_.xC^moߎWoz_o5ü;"__G{~%g3Nepjы䧄Ħ#hptl~~`N鴑>j/F&MeI`3`#ő lq;FD dLوDȚ"pс'U*b5)rX@ U&zll}S:I56j8yˎbxAJ؛ ‚ړ.D 1fR&}Q*NZg'gLV,$(QipRcAmRZ|9S8"pj9냒]x)_>@QN{qZ?[Dl=aɺ=xjW'Bb[&Xk,g'kl$Dն^( \. $C%\|9Pv耔-XA526Vi7*g b}uF,|FmZ=a]au{4ˏz#P hJ:8Kq9h=KmDl )Usc[V Cu{*FdMe11@XQ};|l?i]E4b"q1OqDZ QF-Gp(}dPE钤MM@(okc$Վ#onC!Eg@̖!h"d,rVt/FYLjaE6gd)Gb_2h/CBH-[Q= 1(""#bk|*L19M,"{R LƤ#b fyG#z_{Jf\r,.ƸF\qnG >k(d`3TV(Q*dY= #J:ላ͸TPaO4}6to.ڦQtYǯNh;F]>p\rurӅlM򊤞eT Y>$3]iC~LApO' 򨽴O&!b=!XYa4 E\g;Wt,֌=_c)MxA3:}VhY[={.Sͻݐ!-^(}~Ǘ<8!hY Y:W{)&v$j:•Ƀ)-o )Sp- &]Rn UIJ<-@Wdp|}>+FxWZ޳d훫 SHQdB.g4ijB*@Uz=*ޟ:-JQT<#7DrP2,$m-"+QEa%Bbd2m C(eնDH)\Bxɶ*i0IF5HMfPBz2-ieVx=6b茝e~;ʏfyJg~Ue*S!q/JР^F$PxQ #Yh2aN<ى,d)A0Jn;ɝU'ŨX$cQ綈dYΜDoS9 KGQA6*J%fdkY51Bv8]8D IUc9kr߁{??¨uA)9D]D˯dUrbՓUBeۑuLp!U4q/N p+*{'\RްA(OYIFz*ed&0m26-MfVG?in TcR3 IJ%Թ6F.\k3zĘ MF4㉣Rս/OH"i xP"ѡ(e-Y3$H 6gU}2 =ޮW4mLELf,rAcQ DL%)$0 )2NP;TЙ-vvqkMOv+t1z5y_2.y5|បΐ'1+%HFE/}/Uh6ھ]?s )/"`$%ZX 3VÜ W_B rq#82kkc=IKg)e%re 4p$V0l)&oB"QHYuHҰ6EwL"^ˈSihnEʏfNGŦwP+{ë[lE3!C9Wdq;VNݧP9&ݤO| ÀkкSn88xo@+/awb9M?෿Wpgӛ *Zff ,\*J;tДgL@'!qXxGU`wWIe,:uq? 6.S"` .,2]yK=eW(gJG$*8ʝO%BSϘ8P=tHbaJ{<>RO9;pJv%LZ m҂[k/-ym.YN^z]H^%tSZQ9= q|V.328f /DP(%T2oD̝˾ J7/:/0D@= r@ʸ`@a-MJi8`1waAX :`FhNPن²tYnʅldXyզFbvNW7ڛs1m_05]1=^}!JR!(U+ ί9aPix-F1 )`)$ŤG]quO 0ˉZ`-p(CkC8=wR H -rgj'|tj>ߗٖt;IũUS|Iη~Yoyr ;~ f?&w0Aͳ\Q' 5PK՘FT(%wa,qI m] ן(ȿNVo]H] ?ڑv}YBΞ~eHoi7+9.CmQ[Ra}R!q* g7ɧ  S hS)Da'K=.<#RC.u8* Eia#юP&y Oa>\s 5 =$ *Ცxnqҗ^y n|6HɈ8q3B514 ou/O%c8: ߩ;Ui^*Yuph~_O[_Q.X:Vޭ'xqnчW ԡ}'UkS4Qz;٤7;`ïMeZ1Tdм7xˡC_YCׁh?u@&޵'aYTϲ9X R9E"v׋-VKg"5TbK`ܽ3أE,-H`\X&KJۣ~TQٰEO]o vo*L7S+ #s8B9MͣPދ$YoP,B0J# be}0z)a$EV 1lwf+PJV{danGâO|\Y֦ۨ*ejr>F$!yw1될ت-PO6>  9A%I$B再{  @A3ARf239 HY$R /&Dl T{,m/Kΰ=WvZ/% yw!*kɟd؟}D)xy<_B8Gf PEJddtB!I'L~* @!Y/xl~{*zzIXĊcI82J^dӵAbH/lrKb5'Ipݸqz"8/,9^OJ8<";. 
X2 aZ{lw}]hX$D}!ڬТev9CCu(ʭٍ.jhud&C 4zJ4ݾw&)ARcc Tz,)-c>XUw◸㇋*։`é1Nz>(T1jLhC;'`qa1ʴl¶B 1nMtvf0|}a'aM*P91oǂ[$$&Lv`}&>y%=`؃%y6rd3n#/eu.T..RH ~gmd`ӽdcI6+ܱ^ cxiޟ0v GuէξuӛG0p|5lAzn;ri57+dx~V&XyOWPw x;mŘ<>?N\ߦ~hQᦰָMmt8nkMRC|Ᵽ^ ^s29$0OXR1«(MOEugAu>:=.:6aҩ#g.è:}4,Q?o[)4d9lC!iw~(&m |,yف@J<B ucm֍vؐJ}IN7[ZPÍomR)&yy1- *{OMZ_os.v6ҀfӦቇ&Ž&vUUtbwۼ $P˚MTYHұy"tX8wZWi9 ԽXj:xnyL`{AT0O9~f~gMF-`WTX+0#ڥC# q[cN8G ,B/qWEwϔ~,xc:[bWcVVa+#.xYuəLXRJ4Brbiθ>W‹=c3^Oq$G8c|,#:8F`w%į251˸Csd8>`UARG"wmuK &X^9@쬼}t/5G1j޾bO#U?a%ͮ܍07̸=Nuy̑0ZS0AAA)g {?fC/Mrp5S2hoB2'UFX*( !c儹d*C0!B*CHzg՘ Me4zl5ͭHљ: Sd_< ?~q)U¢F"=Vy] .P;P!Nh)2F I ; jxT>s>`R|`6$S{ F5poP]3pR=ۇK'^P v:j%'vWNV{}++Ji`b*;c]8Xs.%QDQl)Y")JG4o0|B))& zSNq飶ěU`̙ rKl/al0g) 1+ M9(󻄌[lpo'[>n>._~ݠ0~<*߹fG%KOR?\y@)4Q ÕRDא=Z0P&ҁK!#4AsV+?~mb^JmYj^jvNqmaCcBp S-Rf~[sHH&2*hNYڑ ikm#GOw{a'"x"K$~òlY$-6٬ǯ`5Wd!2䊬h HDb9#d/A2> 2ȐWզ[F,cc_(*KDK^"k3"8VQO!FN#WI Y@2L"%Y/ UJ\ .#( 1U$A&HP }ɓ+9+Kj-?MՁt:|LjT\Tb/-R&'8e L"BzE7B/!EVcj?yxƼG#wz13wQ[蒄:ѧᙨ4:jԶQLyLMZ%laig{tA5an=fL|=l!i腱"KKdk !:SI(ikVy]_f1_.v:(sx,cH[}0G*ɶk&M@ p-3lȌlдqsj-ɖx L؜<B#iTY3jwo',o฾lr瞱AyM10eD+TYnJ2vo#XsBX{i-T1<ː@ s0eDٚHؔCLN np'eonM՛! 4辢2?Fu 7M/Mտm{jn48rjP~W+CcQJpR`Vnݡh[]H뉐¨!rV(l!4ipVXWHJRD2 o@v"YTi;ĈgH'N DgW`\.5RjY:@&l(o9c2׮G\m:][R蘼Z{l]ҽ _g?~4P (D@7&op|q1 %3D'~"(]^[Wˤsd02i/y5:Jm|9z٥o]ca4~)TT!Jh6*ر]ێm=l8Vݬ{Xw669v8뎝 v},M$n奂hp>h#TV#[ՎR/C/JaJ}9ht୍N BxR%4b0ɹǾ|gBjtүO}ܔv*\C/p]OީM.K_'I$O='ўs4+qt/UPDV$e W"Ui0Y<0YP0&JrԬOFMR& 1'T֠UPހ$8֬6,Hj%MJ^j_3|U&z nk_W36 oy!.-U*V_=PQYK͐fc0iiP :&<+2Zǃw6|՗oZBGt1xSCJC[ID$+3 aG1ނ @2X+Y1ѿz/u99;gE :4řt, #,@KD ZՠΆI+=^ Q0E1:H $4H A' ф ȄEpEH2Cf^pT4d)5"%lH-):0I4IR[ι(pʹ=&!#AS,Gp4+%u)u2LvB1st,nUm:V\=ܲ-7FWtQ"JCZ|u*AF?fJ*9vbz X҆^VVZD`2+@!d\y+=~,F<1B2RevW5;}-?E>χ=`N*< 10if vK)̈́ EKXj+Iy=?jpr6+k4MH#2sszp5*IrfRt~r$ރ䅿-x|Wn*hzI_OFWxpZ k'K^oHqM ) /Vg\2qrNH%4]tIfqm5,yE@R9H1it J?ZbLYܼ1c_R|Kle_\nh_ڏ^ :ܵE?FysiNB|-zt씽. T6xSk2eV5>JIzE:yK9}A*`Z&`%4Iu_Oa$pLHl DL<+CB_a{23HdH*-,j.qR}a#Δ'% |&jO2 /]!Dx4Zs1{HMNN=Z4DT}.OP':17}N KR U3\sHK,hdqU 뾢4qz]+,oطEq8AܢJ{HDJ["n~hZWCY ,%RqF)oPY@rcN}7x6X@ 92 GFuT718&Vu= ()Go?^'jS9БӠp"511^T5g9tNM>l}3zoPy,+!!#AE%{0w\wCD.XQ>=:"5+&<$+њRZ$i1lWǠ >5]EnKJ'"LAHMR%D啎{`)+{M7p.>խB1g#n ,|s!Zq)>N>-jffALW kz5;S|>L(Ֆ{9A{>94QzKFfjMmΔ#x|9_^ ? |ow _Rhaѷqz>2U8ﲄ+ >G)O!l&)ɒ7¿tKma,&dVq9kHoP2;:NWњK`SZ^-hj2΅)=`-rez%p#࿉?Iz9 4,ڠW !+knTdXiN)%JNHsB坟6&_އ Fmdk,3Q&HÙ)9#C?v~8pyF ˙s9Rr(hB8ny,_}h'$j;F,J8o4mMŻ/A!3V tzigL%$dA)1i{¼ k[]•մd[2߲ NJźbod݁o{;ʾQ:L78 kk%(&ΪFJkyU \=o4n^oU>l } I#]חp{sU9#~y7=;Ӗg|e!ٻ6$W}FI v7nMM Iٱ߯z8(CȦDqiUT?]OuulqSeikȯ"JYVy(JkTxsnbNS;&tι9sn#3!:O $i,T@-.d B$21 ,[tKI<IRLlT%3⩣sNXCx*M g7B xc-7^fu&10 Rdk0B\Wro <6rз<7gqI*SBl< |jR9˛+4*Љ+[v)NyɃ"E{uд*eN&9#}TKH,CFYPTX[T8+_Դ: @!I yernPQ'p(`"YṁYjևM>c4\$"ܲ8dpZ$Ƈ1YWoȎm95J nS9T'4nZZXPdcw fǼH[}CJgk\*%ryS91&8 JI-A3FY}AyORm{@3il$F6Ը/+e-,&eBPȊz`!i!S%r)e8(,ᑚ@žb{VBu3ޔNƞwc4tۏg7e8nsգV}cIs3~`">0B8OGF*Τfy(RJ%A"GIpDȺܨq =00B" P xFz %#VM*h4#ٿ:QE .6#NKbN?Z[!Vג#c. Vl>'nS{.0h~7U"&&4 )Pn(*(+z)<* ==|iCHHR8!}.ڜz8huA/fky O6_G!p\0rU4nM} S/5NS & ƻ@W}Ur^~׋iѲf% l ԓnv K%@*u"CsnR&KWqP7W+Ueō'qLtvXD6g |2T V cA>bǰL };Hr8u~r}kO0~.nQlXmB/>Ћ\(E eG-m$MFCim E])V&cZy} OT9H59FEuMBŢNA}.}p۩vϦ iEtHoXt{o ~Amr9$҈ b;osTu )so M%/en؊HM"^iO=tm-[lCwK~Ev<ώ]xXu9RkD)z.B٥5Ox$]]UAXd^QMff|W"ErҨ+WB[RmuD J/{URœgeH!F%3-jcQ_ NDyBk,9t*{$ ņV>%p,o`D]Hd@ 0P'\Zȕ<*Ƽ6qtх'ʄ%IDf\rcUNpc|Zl8jޫ)qS`K|:ZLOeDXx>t3AIsN듬mȝlAϖ}d(`Q9!J\>4pі%\,Y`X(w34/{Q\#Fn&$NBO: R@B*ņ*Űf쉅}XHm[ܜi2]G_4`4| 3#`4. 
92 {`;l$)8x@1F"NC&{,;8˰ɕJl 161Ec*&dΦt!Rٍn2+TPvڴ0j{ vN3qpASHELy@ ܪ^-`sjD+(c-I;#@!Ȣ=&S,2|1(2$H(ņkRx*XL?EDVY="nen,|g5DEO K8BBb:*Tsd:h ED\J$X"Z&)Zq獈**y4#fBsyQ Cm뫈!.Ζ,%(1!qV 440T@_3qD}2Zj" ;uSbq*x(#@؎KQ^*_q"vG?P4bl#+DWD|2XyS@D%4 ,U|BÆl3'W̞Piw{a̢ݤO\ڏDKq"I0eX2ZY$ᒒ}cB 4[S,j4'ۯ<y>Tp;ju|1c-5sN"4 D Duf7H+Dt&á. & dLZâZ b!c&EZ3&=4\Tɓg(ņ{ouj]ٮ@n_dM?} M$_qQ38K\ыgwLT7{3G/rvq7j C^Ic4-bYdSYjY;eZУZ[ J.:p_Y烟&81(1|0J`8Z s31v97 2qn!q߄{9⋣m0~HBgߞTo+koqrXYpr+ښŶu-ZSEOF_^˽ϸ(Zؠ _]k1qI*pT(<8>=?s_:dw΅p~ [f9ߚepFf~N}OݵE[?!"#2\>w`r/z?^u}K==6&`y0-|I${n~KZa~w.xQKp1KGGZ)*Ǘ?3v*%FcyU-Ax*fr6 rXEۡهmfݨx#VZbҝCaX}:pHH%s{ȓGZYd"g'0rx8T? W>;gQ6  HT@̂!>CeVqCe'AHAx!'N !%`YdRK6a|Y1u>:G[A-b@) w 2@WO Fg,Žt8 s',Y)ѿz/r?\ `hՄg&Ҁ Ax`X 0. "Ё0A* j٠IUOZ1arLJ6 1ZH $4HT$D ιlK\HfLQG'cf7M/ ɬN`{~ԏIZ` %d# =֝Ev3IM)pB}2M;#2qt.Ksb> -W:'D3EpJp=De*ҲT=V`9QKCtG4L`w i!o5r AiD @Y$`w 61-J@V˂.SύZ46F^`1yg @dG %t_"%ld--ZB$J/[k5(KvHBFfݦ3CA8]):Rd""EAѲuζs`JcYХ,C I:Y.^{F߳ឡQоŨu @U؍k{Yh7:K9(@*@ʩD %8"9bd.<]$6\~l wQ?ZYTf]cփӹ qNhYBaRHAc@‚d En*tKHhFdf!9Qy/w|hg4PEJ!H2Kl~1[˲3(s橔'mdoOi0 K$t~Oh}O};W2Tul[iyZ8ᴤU?iibXcvDCi/BXZP";_JV(|9au$Ub@個55 jce|ݘcJ/aН\]_`O_nk n!]ރឃL:= qc?8e6YJO[ (cP!V{N (}Ϯ$  .aXE6l?.So觛AlOVz^aoFmxK⿘"f{o7ӹn껎~~6?R͆`O,gMEt[?s\k^RByHE|yfͷhE648(gMc]Mr B)8cਭ *ze' 6@W2L3H}ek/uJ@bօJIL Q't 0P'\Tu 'oy}#Ck5\gOHk+ \1C &fN|G9=Ë꞊zEjيꍋ?Yz.7o;us~rQ 蚘i Zޘn.'@E2]ʉMT`& -|PCŠ>M1a}n{Ddh)Hb7 IzUL9s,FKkdՍ%1}#n(NZio,7kk"n IaT.Vt6;o&fC`A 9ݘ< ]1TtXR4urE] uuye !1N1A$$]!F4[Br0^&]V 9y'#ɚ̽)}`H;. Kܱ5` 1٨n҃ox~G*a,6)[| ԃu3ALūWF&㶚$:n:n:n+:Z##luVTq[q[q[T:n:n:n:n:nOAyrqkro_| 6e$ F%'/k gdu߇VVlD @@ά (Uy\ ..#ϑy$ov}%4⠲4!FUz3DK) RhPZ pykb"DH*b!;w? AZ]Nrl`5dOP1ֲtc4~):B /1JbP*e䲌`&sArJ$2qef;םm^B3ݵ)ߒ?}9%zϖǾ2)mwt}зI{.` oa9C!`$h!=I pGCJ&$9H2JHB 2 w $u.Ak$XHV `"LȂI_G_%v @-HN ɇ˧Y4#vd7A#Op},xw+x1!HG4]L{`r0Ȑd`?E.PQXUO7g4]*zv*y>IeZ+@#bɖdKw ·*y:<0(tU^E#qΒ, Nu<;" BzM$΅ ڈgqa 0}ˬb? =g5z4gl#ԅ/ x Go .L[s90!F/% 'r:&4& c*Ɵ/w(zsZ/ Vq.npK7y`?8:+?i{kY/õvH(1-3r>xh:v!mȖIY v!x5=h?_н :&-!sQ&4!GreY1OO,ZH|h\fs\?{EA4dr2PªY31Zhζ N^/i݌_)v5swFkfnvaӣܤq[9\w O^Vrn4ύPJc{R͒ޗ%-%-5Hy $}ȍGG6\)hRRm.&V1El͛A 'QGXB6̔Al6!i- "KK1bV>]5AJy)8~!6fvȽ<=s0 v,Gɝ'Մ1'-*`#I;0hQBiDJ+"Π4:s$%ȁ[> T긹-zc-䨒MD[7S9ko2Y,J*;)ca2D6W-:[~^ҘYg9_GX*Pc)+,{̢\ QF&( !B8 [U.Qrȱ[@PM3;d~2p' ^?%aqX?_[$.zbU3B(` fAx8Qp8%O` $LJE)C,};e\VG lrþ>x-eUcոt!`YSM@eЙB@2 ~T ij]=K[d'sR䓷K*#e_A(5XHͺ2*la38Jq_`s2|qY~av}}{/op<_6J0cA 3>aYj $hLȨb[D~ !`ҪM p&  s;CQL`0Eo]&Uwv[8kXv38jV[V{@.bZbqBmJR.!C2$cupRI-gjn^mږ+)4̐J,:i5U)RbTlffT`xfm{R)X,b38"Q q0Q1GSbΒ%dilr\53,#jFCM-"#b!Cq uJ31 9r4I+k ilugE|x#:1u6Cil`xo@O  ruLŧ7DK&`.=l&baL؎Tܹ,>bybowDяͣדl¯< Yn>%ucqi 2Xb"-r.yRP yG0~1s1s>۹7^v>zx3}%]Mr篷uC?M_!)T$b)jL$$JKK1:'`L [疗YE]jPz{6oH3H:$bNʐ& Q@ -UћRX4BD,Zt@ʢX@HWLalIF(p tiYwvv޶BǔڭCw૒=G_=P%6J;Ѝw;<QRYK>_[j $]EBmB)3VPHk5 E=d"x# }0Y]zwJ!GgdRbғNO5ih.ArַYhzo]U'|Ң v׽|' Jx^mww˻VޯַIzOZEu_j=d~7sW2y> 5J,vz[wW*I~wi}˖'vR&Oj3?7x(nu*E-\Z%y'mg`;gqR'=D>~/ yV萛4UaDdz|/O=`:Vθ'bD[HŪ+YίŘ?]<=|Bs1jPB!$L9KPQ$`10K\!-nK%>eޔ>@;P'0Vm_*A wܑdEdΔ ]@:\JKQo'+ae *;GDzv`_FnnFeɤsMHS|\*& "2rqNyc$찞*yҚB$A))!Z :ВLs5نƚD xer^[XB93iokZ׾ʯ?/<➕ziu+w7T⦡}rh+HVT(,d*)L%EJAF 2{*LnQNsdA^jTdvŋZ2gWDrgބbc@D҂o3EA J+Ò75CF Yq'\$-0z֬;{YW5uzvХ˻f]rbד ؑ} z0щz0 іѾy1ɿ_=*%/|}`%)t f@N"B6b!1Cl jxIO(qW#$% ;a IY:4db RX ]X?piqyq4ٕ[ADf5i nW_hUf )f@DZ*P(QnEv9#5:hJ5CO6VAݽ#e!SI:HcD$lHmJXˋKQfɦ) N=6`;$PPC}Xt[OP.HM(Ĺd`uU Ȣ˛mD 5u&iUr~A<&{Pr)A}&Pr,sgL"Y' !d@Zaaۉ΢D+E.p,D"`+D֖\fY r+ͺgƚf7g/Lˍg-ZR`f@VE( NF!FgCcW&AƗe^$T*$^IPK$x`AȊ@N۪`t`Gѹ^XphR{t3@= &9g:%㕣D zĘzR(R??L ZpE8-w'SE ֛5NLJ5ΠHm5\BץH*PGIdj тP [s$yw)k\3mBDZ_t.ǯGu|2Ǫ菣9K7??Ee*?OIީfg.ڲWNүP*|%v`$8li4o>_sw? 
f+ m"Ŵ_6u|LoA%anZ\Q^.'Z]}9hfkQ}WVmzS5@ &tߟ^Mrf:[-|2vzhsFJ#ܪq %B\3pZKNp>wqd`ơ[][ 4%FIH"6O-e(nc=O헪4Y3 ˟bf^ꤿs]_ڟ.b<]+iZyC ZT /Y ;uk פe- F]O .aWٶ{c;=Ly+ҙd1L h4AfFuusq'7Gqf]fyN(ԡ)YN'kx1IY+@*a9rqE$Eg;~kuaivX/ ̈9f޽Nzv%ыf)i߽V R/F}jrΰ'R g;9P^5+??/VWb:nyďgïbױ$7E&d:vʜWJ'yŝxK+f%CR*cž2w"oihcGHTQ(xOS~1"q[.г $+FݟPͬ[5._.?QOGhbmiMekTmi/ԳԌ|Le1wUgq'zoZI-Wx6$oh2ЗV[0p|OǴH_O:o1Ll[wE3|itV'-՝wYuf}RGަK]װ+`޹u#i49x3`,0YlL"xM˒V'}yj[Gj).muHC.#oնkY4,/wܼ!n7}ѸKX,:77O\Vׯ.CRx,՚:'龖o>7_&X;2f:뫽T>hdt p>n8ʢAS$j -!{AQliAk/jhuX0owwz?Zݧ|Fޱ#c$v]O]tÿ27w(^x=Pm^%}8\*{WG٬D=a;vtZp>z^A~+Ye/h/pV.`{αq$Nut 5/`b~'`FQA\Ogs/NO{؄N!2^t]}+*Chq%-PLukǹV\|8:<ߝ}s|~`.߱߼ kkZd%ˊaVqT9vDbk0W;rry8WBGo"}x~|>{ }ɛnj ?.o =!Z]jوo^|W_ON,af:xe"ݠ3yoz˧אƓ~"Uv՟_o~ō^A.p J8d[ݽMxo<7㋽VA{ԛ ?ީC>з F[ߕώ;/7a{l"#&`.nȩv)P! wǹ21 JeԄVY1Bp2I5O]E^0%v>VlRE&'RF⼶l22 Ib7;hߗJ1UPYkV()VZ$~AG)S-h~kHGM\ 38 B*P8B[Ul"IZ2 &wxsZjHPk(T2RB3JpMVdl)Iߗ3i-4ב)Ք/fL.IiR GEgit2Q0URB0'ZK×2XɌalJY ol&ۀ)JZ̔E8v wx3S u^_wL4Y!)!d*ҡa4]$4)/*)5 "ߧw)W 4fѱ]ˡH΢dW{! ^@0^#&@ zu~s;ThO1!X/O!Ec}Jv!erZ&>tFVcm8'*RI`Ef-\W*!wu}dsil RwUS5}GZGWH}fE643s7Kzּuړ(.zчD v%d Ǩ1K‹eXgM!(dGoUz$IW*KI!oDJR/^qjy=MzC6"ƒK-OnE*Yy9mQN+afu%JR-*jlw%#Ƞ-ZenX"\s lNSƎܬѠ+.t-a6TCZf1őjJʞ˺ Q8 O+CRgwD*9TULҐK FPlQA1x7 [-t*x$DX$A  l3ʄW&$4 { RHU(NP  * _tk5vVկԜ4VjF*z]C Hd!fH57]T? !W(c)(d5!X !`PgBEHC#i5KdY&ysS ukJ!AN98bg`&CD` ʮ˜L\Ρ'8TRp 3CuU6,!3Ф~ζ+@TT|FLF(J8 Ls$eIcQڳ$wSH[oH JERBA6cueRsO.uGZ2qЍqPBi5ss{RxL ^LKQD+fcDK;L')Q!&̿`"me;Gv‘ Ptd髒@AhKQ#ҫwSPxdgܴơYE ʃZRF<@HE&rZ!d^1P>8M>)3]6h8{* ݗTd2jnuě!q2pt`یEUfUSY_<{U|ΈJc0&K΄fNOcnۃՕC;<~YOf__b"BV"ԥt13ŬuԆHT`Qw@r!Z451E-=f nsp1%skh@N j͗)ɮ$)c6̬D$K&VȠpz=EȒ,\\5vEQpYL$E}4"$?y ZvvGycQ5E[E2$c;*gDH!(r")5ZOzڳ5MU#i%P~ѻ͠A(RIQR@t42߬n$,B Lը[ieT`LMiXq򼱛>x-ǴW&mE#A.nnf= k5Ev=MgGAŨզ[cVYkA#(ZthQAI3)/OYWӣa·UfU(5n(!/k CC*樇2tyܠvzI,c5:#SA :(B , R!!K=x""(f!=-T}f^ m6 + IJ*UJ:*Hj' 9)E^2`X0jaO1'J(0RSdPGb$djbb,b"ZsO~M1uBͩ X#3fn4Hk֬U%(쥪Iu2 ¤~]'kQ?!\Y%kP|P}bGTBmEѫxڠFDO+ (\u"(I&EzQBb|~)"Q%qT5]PzjJ²oC6]-R VA/;^@\T"LvV%Rqԇn"&|甑pjCoBECt](IWrlI҈5.7Wk?]wTu)XoзL0H5V[6קl7]휬mƄzwX!P`h 햳(Nk+PBQX15Cvj޷񴜬n]mAk5^]_ԝ]o_ӓ;co߭/^6~Aq/}_{~xnga|a{}thh[D"z>%lp+Xk>n+]`KĭrJ1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[1nŸV[="n՟[unV}ǁ~sN8ƭD*q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭbqKBp ns[wq[J V_ ne2f܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭb܊q+ƭz<ʪ۟nu'=JWV Lct'/wmI{ؑ@pH,UbL Vo̐(I-Gl陮zjebT >OlPݿo~0OAyyJ7p5R.6EiSڹw!.R*$\L` }~%Q#0\ f+fOnygN~4{ՠy/W: !ZB7IfŢo{l\ov˃YSyq?xoƫ76@ BX\,Tٹ02~׍{"E!bB|Qp0<&g/c:*ɀM: ,܉ 00Bֹq ~A{Ѷp iZE fBfUұ(0(?P 00L}FΛݽU ~WUu2v` j{j`CM?<|;-/xR!p,JG>r TDzMAbL J,,SPʊxlBπ<_$25+%7b9 dW4̙ʬ)u5s;){I%1-P&zqtu:8'z})]415#~|4}>oя[7^> w}5yD f5$rV귏?>r/"/7i꫿f-3rߥijV/ϳ T͸Ե6vcbj0y 3n>;4|t%CvʏU/a^G 㔷WS7] =7ZMO. (6zm7Z9m4ld<7u -O4PGO` 2aiT,sH9iK'z}fBˋ*eV`+6~Z]Pe.(Sضڔ|wV'iMi@O̦0DꨔAWxӆg0$ӊGFod(@|Yiy3DOusnnr{3QnoOqt= ۅgU+Wod/Y< dMX3G徭 a*/{iVZ?BD%Q'xIy@tRO:\x9si-YY%Bߧ?{#)Oqޕ{ 9Uh ,4aíeZF,D14Ga8 q@.s6v-O7 ¼{OU?[o|ҖV/J@M6z=`,ȫ;BDdxT IjIa1x\r`OOisO6NY6`đmb9l#BWR>Q{B =ł9wtߙbݚ|%SJ"UqWZ=nD_ꪾ84*9+-` UZ.JVYqs:><թhm#ãփ D0vyAIQ9}@&M2H˃9*MOMZ}?ʠEQUA*(㭥Ve:-s 3 ^Ƙe )L"{ǫ(cbg 똭gѥhrNR)\*ņs*ŰdS} Xdu-㞝bJ]ﶾrRA&d\/pfBjNuJpόݹ&hqϢeDx H`= a+ ٨3p;blf`? 
TM魎R9~6\Ptڴ0j vN3qAˊEL&KBr=־+5$+([>K]FR ,C*E4`MS,1|3(d I;W 06~x.XL>NEDVـ"en-A>%#ϑ K8@Bf:)Tps`z*J7G`, jҔG-XF#JjLaqEZبbKTcܹHIpTTN9R\ .P@͟3vԜs'g'<5Ԁ*C:q#b.3eQQ*jkjSsN㆞z](myb;˾^|%yRyβJ iPX 3+Ny98mENR(+t :Ժ&75/vV_Q:EU&\~?ol*Sb$ŹJQ[>੖aDTH+j Ji,X?CGՙX\9'nx0N3EP"X]!$KMșHÑUx$KڤDcOSeD.4Jpol:xeR11i5,x LMZl8G5)ףּx冧G2 $,UbZKІitֆ 'e4QЈ@5@M=g1c#P i zTu4 ̜$iby U )Ȧ45Ry| Z#w0HK-MGP9+NQRŠ_4{5 "T,[KSL>6[x2'&`-uR+bcDbMFl&*A n~מlwA & : ױ (8i>Q&ǗStܻMP[}r=+Bߍ[tiύ(Px_il2VZsFnG]KFYq֚m?/=v=pfh3ߊY‡Ҿ5|&aլqkߌ'u0ލnDeZfMGiGȝͻ!s t|;yᒳEx,\*Μw3nC9/p+qF ]ťVS.\]k?\_k" P,X]RyD3\0%Íwk7oRiC1 盿\u5 ]M}kS>޾+2IuDNѾhҞ˲~ʱs^ o7e\9G /a~5š`L*L{u|Λ*4y(uO2OYSooWm#c攗$Ii֥1;Ìe3=#sQ*Ȭ!ZIIZ?K8+K쪂4[vSee*D!e!a9cm$(-#NLFS2lm8 cg ^5rU5?>"GY]Wn6ϡ V+&]^? 0zKh]?.7w?~EGvӬ:%Gk{Əw>#r ڿԄZ4vY6ƅ!H~uV0>Yi.X{J,NhD(Lq!yzAձv{XY} (R4YTq)"%a=JkC\AnAZ鄒1&e@E+T4hiB$ Uc,-g!)] ,6qUCBakYHU$:5{<"1<8gmOsUghpֳs5`B""'@ӑ[ͅH>-QtsA2(* %1r bDԈu&+ !6Pd4mYZNC=;>2/ABǼSF@@ޑ ;1J2+P:SˣYT$lm ˿QkR)m40:$wJ֒Zh# &hYZA-7e?)I+lzGˡ)!:F %9^XK39("sg7V6lK&^zPGHPmB)Qxyg=vYNR$KUm흈r* =>kMD|e[I7lLǀj̸|ؐ6i #X`,&Fգ=XE*Q$AZG<'Lo} 4)&K]2H& 6zIӧ҅dZgZ=128f /EP(l%Z *7Z|O<ǝԎѕGTERuyN a"10 ”W"qD [.wSZ TP < *h%îb=3{됂4o[N|D 3.JPRfSnh5.uNr[Nl#FuLc &U*I𽠝\Vд+9Q,JcF1K)`)$$gd\=+ 0ˉZ`-H(5xI!@*Kic7 *TNLj9+j$ۧ("jbmw6ׇ{}}( 3]/K͕c#>C@=-% w3A16}o L??d6a*.LŠv{d$?dEn>Rż`@*a0Lm4}tX?wCmg_}_ x~Tj/_+#bfWw&7xpwŔ|@PE*jKYO)/: םk/?{s٨KIbw21'քNq gۯҭ!?׿&wZd"Y^6):Unr:*S*ʍkـve9)jf[(:sz `za a4iQ Vӓex*o4*+gq>MbǰSrރuQL't˕_ipwo}qB-M+J(V{^=J<鸣/`ZqNʀf#Iqo"΃̴-ZpvAN0B}2?ϩUҒk"yU;3/ mn$ }ګz5WsTZT-Dϯ?aԔSDX7`oB_o{SM&smʶkmL:w5۸^'n XSy">RG}dԩu&vub I9E-2(/7mv@WtݭVL6rN $rS[KQrԘ V3 HOuo |H}2{2ɢt8I'j 0EfVI+wX۟a7o:x\v̰[p|VYY|݅ʯ+Cf !pkYq? WD{t" qLFc o| ')jn,)XBJyK)`c>XU!`WG%Ў7Ϙ)C3m.VO'>]%IkVt#Ziay+?ۏ"% vX.ti,qkdFϿhKv;LVaA(%u?˫"|Ma2R'z~i{ӹ4v߇Zðl•7fb F槈EЁ#L!#˄A8m%`^jH!!,GYzST^V03KK<>}w\73 咼$o.ɛK撼$o.ɛK撼$vol 0#\7%ysI\7ͥrIެ$oֳ\7%ysI\7%ysI'咼$o.ɛK撼YrI\7G.ɛK撼$o6lBl̃R@V/RFKLCɜ./)$ր]@> "rJkoZ|DUkq@ysxTk)0D ԉ*`S"1OK)lB=#.*%å&L7@¨J;f2Rʃk &=zVzwGm ab7=#V}=:_,K=0:[oVښM_G1VNp$9.8ʗKXHB^ (-?ijt((fJpT `t48T8ˊԮ"=GQ4Bq!&uP'yL8U8UCJx+c>#7zT0Ƥj3MiP8NEpP{U,#\ZTiʼY$'LEc"ɯM?宥ӗĽ?8_B wCY>Fi{uQ|蛛ɧhvpo.` ?tgد+Ɖ݅oθSDΆJ J1ugf<׶l<r}z] F 85iaaCR5(su25 G/ӟOAfX(@,dKk,)Qm |a:C] SY RSnG S6*  ,WyȜ02]) 8No dx\,vۼjge"-{}m_6Gaq?xAg!2;k?Zi')2 clo \ OO(O^bgO2I[P{E%1)uJ6 K{rY8!>=╻3Fx,Fd|fHZ/|p?pEdd_"}\^8WBG5#&GX]_}p94X>OCdKf0D˒`vEYQgPE~bylGz Yx1Bcc :`2).EZStB:&l3b97r}z۹>NGxv ZyJ$\7~̑e5{xН$B/I;[EqU]y;\vlAGpPlLBU@Wkd 3'HJ:)E3$%2ː%dtE-k B=&V.ͺ3#blQm˜=σK.a&0qi޺鼶y4kw1^}%jlɝAybfށpЙL ZQPh'6mKڗ; x%*oAQsr:꒓(dI"0 Ai3hS25x&0V }*>֑eͺsN\T|v@* 7g\x::1ҭr~Km$9VǚS -^FBA&j]\ٴzu0F2aZMkOMX[2x)!`tOVu7oZ]լlo-2ȳY#N|AM!Y͟5Vid3Cv\QuLURl8~BlPO ̺U=Myuv/BoW|ҷRʞuQ0MQoQ$79Hp'mOm_]| 8fG;M_TDZj( XLx+蔴>ZZk %C!x97ol1h !e*:yF"w:KHEDE`cp $;71O]W:aZ;~?$k 'HT)1! f;˖r\1Jxqzgu.o& 5"Qv:_M Lȣͺ,޻&%Be&~a~,0i57~]2dnDW-ߣ^tsaLYQiG!lyS6xADX7AbS$#IQHѻ֍kӱAY ,$!K1\kXh/R0Q(ҁ[/䒽qP/GQ?qT#?9nꯧהLтv!8!`eN|MȋUVd NJҫ302\?S-TrB]|ݗxzs99x7\#T|:ei5 @vR;پ&Eb]踛-b$:3 YNdcU%*cwN{bk~ OFc&O~DU63 Zړ P4: l^hJ%j!#\5Nz/)3Q3:ƚԱj]ʢm.$5H1Dm֝st4:٢ړ'٫߷6"=.^=qMEH1x9,g"xH&"l=ň肎TcSIP\^E[ D} \Ҭ;O] w5_δXƴXdZ d%P⬽-RHB `R610chXαXB޾1RE3n%|RLʁ"Y8g)d'cQ: :0&VSWtS6 ')w)ocw;wwpLWrI4wmR\u֞/-l_ .Nx]Q3O99O}z~DHx_0)Ἁ.H*WFrAJd#3BU&f)T-93r(@b5~U)0*"%Ej ;<"UPc׬; *{Lav>9(# [)(r' g+Bv@Ey&锊hzP@W@m?[YJj|y6Wj#])ZBm֝ 5<݂a)S>]lJGfr*ba?*(S[۩˯VP+UjD?6@X QkّGxR^M*59D)RR"X ('(ŐIIJkjuflUf3D](E[]xjI>!68i"=>qfWW/ݠFB1k,$2l{Y0(5"1! 
>ؚUP=b+XjSA#$ /l ?d0 [кrga>KḺhfT-km9i tys/hI Q)QH (D5[5Z*܌2Zmrub+ [Ȍ a+:Y$^rNzudID6'ju>mSUcшǩQ5ֈj҈FْVkQ̵\cج;'3u`&v,9U/zQOzqҋy >M+a'6P߽Ҟz.2zCJ 1vҋ8}lvETؑlE Ly&p4F&!;Q9#D{#ŏ7j>ՀilD鬶Js҆pAIK%J%xh'= R!&bqB?jx2 (+J"@mS* PL(NBQ(tLu2Ŧ䖳yu_+_l M%s-DODPQR xja`KhؒԠ%}}NF(!1(D*m hYO[>slr_KC&}?돴VYrVO>{Ȁ`TR#q{&TBՎ:):yV(bCbs>5U +0QTtj,u̷>vՈM5v j:@7{ Qg//C2MۘfeSÉȵ*p8Q^ov;f/p|Y>/Kes76 s.\ןȂ c=?zy78X3/? zIV,`I~h qEh.#s/?duws fr_~RKǖǥvR4\:ܳ1}qw}{Ywc֒]&Ӽɴ;-|ݨf|ӛMo8< k;t>ggo'0= >}5J ion{˛dkYoh?ɔ$Ɉ4hLzȯ B\'BMZjX"ipNxdwߦ>chqh>p̯Zibk-m-|oa9a8=_Hn0E@1&Tbb%p'ѸXksh:< sy$I) y$ş~Bi>g\(%AD!%-[KJ8ๅHe&AUI !=#:j1>14ekOmbl#hg)h[[U>zhˋyqu@k}q5ucj4>:LLaaL}޽I8[ͯ4)yΪyȦŰp{}M{!G%۬[sYG3r~ (^w.'Hc#iLhQъVʀsITċ Gx)R/cw;9ȏӑ~^oO:܎>`aXE%\(.!I!LQ 1E& 5LMwa/!;)GP ȭ\Wfv(ރ^fm+rY`Uh&$F\\E"gC\k_MђgE5y#j݁ONf~2#[xj4xRs? $ B!DpHn*5N"S !Q6D'Ť~%A逢 }-WM(a8dchӤdFv}y`^;LTBg?tި#>ΞvDN]Ai;HЭ;H0~S à =aļ5 ġ_Ga2>O@ƴ0B"Ǵ\H-y-C#Fkti2càe NٽƐی8wu6fi0I#dH}'-t%۽&PѠ?^-^mt9݊n;n#+ȶ6o"d,[h1WR!٨N} Kzv׬AIMB-.p>?WjW㲁 mM^e^ٯY{?yWoǏNR:cpjY#,s!gM:obe|{< l&4z08gn:96J7?9prxS?'`=f7xPh=`q-&'g'_oɴ'w5?ބ,^'^=_^\Cgc L$ܜѪx%1I8p  gIŤGvtJ[}R篸~im>7Oͫ;NF!Ƭ/\t[/_IEՔV"2Y9Jq}oR\F)bf@Ulbo[k=# #|6$QVv)9nS]Cq^kŭ Vd{[ mKĒѐg5+5pb:͸tHm=ϳl^i'/ڹI#k)Ba0uiX qj0Ay]z6.3 \A]l|}w9 t--|kw杲rD yw$WT^]<Z@P˨ N%#at "uW BQ E-Q +Pn 4/Ա-pZM?w%rI^ mIeBUZU+uɥЊR}8v>:2h+L`1G(1"3T%!I&|9y:FL 1hl%Չr4,@xV:C yŦ%0UeRp ? &贔dId@ AS'T-lʀSz@c^Y8we=(Q:MB HG ϒO!B.C!ьKnP_XPMg66U4ɁPO]G쁄 gQMW_ȓ:|ݾR2]vKOmQEQCWYKDa눶 (<gڂaiӼlMICCr ($N7Rd IIAQŁR) *Mgf,gӅ8cG]HIY]kN!m4ȖqKasN|\08}* Fklh2 \Rs2 {܋;l$)8x@1F `,Mr+.0tBlGM 'aј )t6kl7y(Zw쪵iaM;Y:4C.WPI\$ϔ4 ɭʝr+a1'LvI* I"d @yԨkb1: / L2:jt6/ҟb<X?vՈFdF4͡9Ü%ଆB)Pa G -4m8ZCE5"ɇ#8Iiʃ,e=FD<|"fBsY'ŦEItQ|ŸdW( EN/GkcB0Zhh`ܩ(1QHBN\/>,np*lCSGn>c'܄Qa[uяh)1J~q^)׮a@D%4 ,U|BÆluA80ȝVlOa;ͥ|$h7i"*wʦc2QF$u0IR K\%1& :eS!5GM[ 欳u+t޹\ƷE`ᐱN9" "rV:aMC^&@L*-Vᴨ(D.jE1"Pa UJlv3h4Kmy{OVzL v\!GBL+m#kuǯu@6չC][F'U"W@Q@Az>'q#BFYPe"QuY5U|ؒݱ}Ɨ/u6T^ɧwqtR a$UQ`5nQJ(Q8Q']mo9+B>6_,2w7=77A@IgYrJ2ɯb%mɲӶe$ݬb=J6`{2TlH|Ɵzo^2D OKdݿ0e])v5NY{H!Uz*F3b%(  5JK4Q?S9fyqElv=p,?E9QURNe㡨0hVdr$I%I~ H!LrT D+!Nz!l\H `ǘ62+Z1p!Z?Ѕc0aM& CGbEj1D$Fa*jaSҠU%CUXۧYn~0OTnMD|g[mjBQ,"^!AN"+ ʮ E$&/ mԣ?X$!ճvqS[+ܤ4J^>=Q$j**!aH6ydBaW55,N) 2(unbi@db` R !=.+<"V9$Ţs[!z5lD='/I} OY$J4Bt`k=i1|bsU+dRpbFE(JY{ [-9v| Y֙Z6BхB ,$(ע89cȁ![m ػ  Y>˓:"6zPI@f&Ϝ5":8e3e2tF" X=A7uB? Rb)VKp lGsAu'f4ԯ"=׭x/JN"s=h ]ЄOHf<^V8 J`B}Osl1^Y,v'ʣ9TeI[x_Fm=?E&UǍnt̢׫l|7'V(8&ORW]j/Q[a/0IG6:_^-UlwP}¹MB$_Muwn}>b9uksP0g=OXQލ>M)rhKeޣz[ tQU^q@׷'t\, vg-h-#rlU׸BjNZж2s)'yΊg4@Br%d'e(nzN*e̻ş72:a~Z*tvkѽ_Wm۷{Ef>n`j)Mh^*P::jauPl@uj*u 톴G0?Oo>TsrCRz=[;pv/4ٖ{xܒT''P9 ";,`kdlapAΘإC<5_#L^oyvYw ">j2I:$kĒJ˩,WY_2J2wǯ=t':c դp6W|Nn]6SӝϮƥFRMQY5Ej<h9L  ޒ+p-^=3/¬}K;r>A[Жvbw/MxçaeYFqY AOOVg7Qrɾ [Bj R3ȣ>Sgc6ˇ<͹]_ץp2mSs^}/deԱ߶pww)Ñp>ߊtv=o|aAJ^J_=Z6bv@[hr t<b|}D<䆏L`c4%ܝ줪cJLtqAy‚ ~+kc>$1 z_@#`CJ;稨V Kߕ{;{׭1q=0u#}eɼs}{wJ` [;/Np>} Bde_d9,6M1~j ! 
U0Lz >R2i% ĠɇPדW6:, 1YGH]cD[c{΁9 'nrG R5^;mWlh0>ɎRav/:;>+"\;^zû^ͻ>wV_ያ+r)].Y zR1nέn=m,Z_礽uA}]ܲRZvww^8y~y|л[]ݟu̿֡ewt3N5ߵs`-_5= N95m2X52Hў{a:m͟)6z|57+ӥEk/?&!V7Yʤgi McMԤ٧'~2vglf<S|/ϗ=hR{ǖ1Rumpݾ]~]lXrBAZ]"GeH*ބt(1-\)z .!Ƿ[.G.A*Y!yƁW3c}L(541D ɨ"O=i yE(YB$0Nɐ0y,ccۣ%?p,ԗcg[ VٳZ.~bY jxN3җLWw:=:B xEMq^N75WOgƙȀ,]س>}Kf0d%C% ho2* (U)X/_?V)CƻLPQ@2:]4PtNHkuI~ݔWpo7퓶x ןOjǎac9V|m?@gz?Ͼ3VἝϾvsC|t B0NT2ذЍ)xg_NpttYQ$G dNHBqx9vW)pJ C" )+tz] %($EeLҐ'鲷EF [ Iפ!%rpX \#J_Qݺ:V9v~j&L]|պ?irKDff7ÀWwxz)u3u%|#xS_)nF{ӻnEiTu2Mh`OWvNvX'@ΉǼ`@~oF7RNS;"i:+`Ζ|DЛ=F{H/}xFue~YF*T)n˨8nKeuR~,1^[*-(ʹvtk/R[Wz.E.v_Fܴ[vN!S,2 eNo<1q: 'k DBcP?eo`eGΏ0O>ARhX^{ $d*JyXEd#"E[J *IYǷ Q+`&PDHR4Moù;"b8OZpcsM}Vhm0>|SJfMYR;)F &7qeI+;/=Z)w5jL#1bKPV.Hpע`L6Hϊ,4X r09 #*Te lYPibݡٓvľOaj>J*Tc1Q?-- `'-hl;fߋA:ziSRO8|Bfo4(Z1SR?TG+ùwY)SrW1>9W]]29ݬ {K"MnKFXTyHjh8!')+- E&zb,_/408_-C@/f[[A &%bfL^m/{`n~nӇ fOjw;NV-.;{DOb OH#t6T(N8M.Y>%d`x1|Y?_g[3J~>&\2ۧ×ׇR>V:DAuvqⱃEA6w?߭MBwh'& @4ZM(禗ɏΪDp?}P.&t4 lyQa\H.GRb146HsF篸m3msVyz|0.L'C (Г—tg={͈MO p9ԷK>;+5UBpMwlfҷn$]q-m;|JM1j,5֐׈?8waAלƨdI,:ȶ\f}BN"MMb)ݳK˛4&!|WVoo9,Kֲ ڥH k6[<e8wVY!r}3=*[FA~ݢhq.\t^vJîu (Y>".>>GڢIk?Ӛ0G_O':a;a ԂLH\%XC.9+]*x b u2quµ5bu5VZXb jۂIه5w 0-1X^yiOړ4t-)+AR~;|2hI DelI mkC`˂Dn ˱uXH6JL%j`j!R&8sY_/Uwzw___N x<2 m0~fzb6z:bMUZOFtf~QSd~6/tq*8wݭ[;ޕphvZ wI{Ҟlt;)ժㄆd\T .gCңٚ=U8m- .B]lĹ]_z _._Jި囓ya8we鰋 };}POd}YE'urvn}Y/wL}ln n׶jk7躟8y8^nDm+qϣzT릇{Mtߛ?F iSCLɑ(3^>.)yJa7n$l'$p9`rT"%UvRర·2bd[=P KaJݼv&[o>a.iU}u<#=hٵ }iL>Zq >kEuO{T8W4tT m=ɄyC5 JڭqG^0kzb9-ּ5kv'Ű!]0yϾ6ȶ5x >qŻ-iozZ,w: ,Z 6x…OY7!qk ZEBusb/]YmvNv)9No3a'ka_QnFyttdxpKʋs| x4Y͋5nvGϬ/{`ҩ)َ 9I2&&e8Ng`@IӢNF N.xCog 猆}I,oGm ՓA.kȁqI"r4%!"EE>hgKVH02V>?J EP!ւP dRʊ3ƅYEv7} c'ÐU/iحGwJkbBƔ֗BBU!K`+uʌҘ"UYF_YRM5E\[7(juƨu m![ZIiR o@diuiQcO´4̝tY83*$ZvщU9-aQBMbs .h c6Ѩk}͑Jxe= ZǤ)Kwwz!+Q֤2CvJC#WyKD. rWmHβ+JmG+D  E47 OYs9"@,U>+;c'DɘȺOl<yK/J'1ꭕs 0ɌU sr5DJHB˴߷J0f<cz.i#U>1.qaJQam-.eʥcEd3(,I:G͋u0"{ CDOQd}, 4"f;Xp.PuE!; X;$}..9!# |p3UCk\4c k|BnxJ v (* :Y-wX ~98T.È#Xe~T3@L*VK.7UaA%W ߪBGiHX8bYG qAP2VިϔS\A\%0+*ya=Ҹ6ږ $!E+p`}ޙJ2K[i*)!aq_V q f-t*X4XTZÐ̨`A \\idSX+ģ$ImY$YuT'L'Bp,,L:@=.%f !$i w)#X`\AS@`!P)-w Eq:!nPZ`0}0ḻ%80)Й5d<D1,ŭ"3X~fJPgBQ"!\͑)x3f6"=W/: ߋI ^i۱+a0tYaxQQ@ #(be LR )T~M2伖H !Bi5X,L^ BMeVzK02y/1^e']/F v)bVMDr|1BXNR!&D_{`V9(6S NZ7p *eՕ;ޅHP}65Ķ9x >^`8:pĥ8 (}tEVIH#qn2rc2(OpHv9 k=ȗUXPp΀n(x$$9-d^2MHʔi. k,Ɨtғ=]@"XM.I%x_+p`DPHcQw@SvpC$*q_>P1#TM ^H^&^ ,C)9j '5I3Թf@r]Zx l;$i-D;*f`1btJ_<83 Ҋ299h5v퇅EQ0QL&#V(M#2A~j8޸n}Z@X0Fa!$Y sEΒU utcֆ-КO5 0].V 騚:K4$%܂D{=S"1 w׌Lԃ (L hTk#ES'2(\@X 7nkdضh"*غ<.Wz"8ircҀ'W 3`G-ϰaPj_(j1"&clwp ;CTSXD (WQ:7h#l0Q80(`c M #҂c b"$A&I/( D `څµNf֠BUZ,ڇ7E09%NZu0k[Z  +(|rNae33 _TC$վ iDF:Lr%p{iކ`ݨ6Ke{:U9Ҿ."-.PEqi`%R!8JޙÇ(/QKpposIbUJUrEP0eb12 fV7EL n$ aK@[K0K y]0 F Neg,X3- ?S0 o:zHōtp adg!"90TCYOsr/sw0W.0úC$,)gfc)Q:,?ԼzlkgXg4- 3gUM൥* 7ùu u n.?^ )Jؿ#J FcPn10C0eI=!>`rB%$ܘo;$멜 CnZ"ӧagY'S) (^ɛVo~2 68g7Wylzh\d\FTDu6`l+7SO,%ܷXn ͺRG))) Er!f=ƎEZ]9C*nA\XT@~tk,)1e\ߟQ_Ҝ-h1˕X ^1FGsXJ{npɸ#vw~'7<ĥt,Sg_,Í'1CϾز s gT[nlugŎa^ 1Oj7C6ol9Ym`.Y2#(40VI% Ei5LaN`㮯UURt+3 Ae`k28jtfm9A.hO"6C-hlYD[#ac/x\EaQ.'s/`y.0 bW%}&0ptwpVuu}scbx2ãu9\3&7f'@e}Mpaw'/ &Älip{ ?ͫ=wd+V 6>_ӵ(LD WSU[WYh}lpjg 캃 -v] f*Iᴆi`#kWA;|ȵúPsJWSF#ɩYV#߂K,wj%Dc +'`Oys }^䳦|WAɟ-m,}LeÇR5ٲhYry-o ag49mA>Z&&#,vhyUY:V=,2<}v}#t~v{;}4qi8<!¾mXNxOr0>:gw1<?_7- CKZ jg4ÀJ F8 ̤"D8 p@"D8 p@"D8 p@"D8 p@"D8 p@"D8 }U8 g;g}o_]O.(~ki[|w/n;f|R͘~jɖJӂx9xev/o ^|Qoo^|Y/9-{xxB1~p}}1˓']3^Ɲa:qpNBDbCg4E)"|MD~:<{I(CϿ:{ 嵕$ 4Yx/Z* ut.W]GjV\Y畡XӵwI&zjD=+= z>y܇{FAG$v;lf܁` f'l λl`3! f6Cl`3! f6Cl`3! f6Cl`3! f6Cl`3! 
f6C͜*l樉Lw?1tp%~N"PwX(h2 ] Ke}h>G˒;=4,ӣi&t~mNnd>) mo[& /UI7??q :~G2 Y9¼MG^ l gLAufSgLHݠ*rr)o X/βSONFZ.ଊ!$ QdIZo$?7{7T$úh ^L9RבֿpI)rZizMqIK=`8pJR` &VʤElb…GJcI ld8IݾllܣZwGgȲu'ȮY71Nuޫ\>4>: J;3)f,ZE,+*okmVDowɓSQf"rʷ8k(1*#T\d#$`r 7ܺ3ՐzS3,9voqnk כ+]eYԳۥSPzGP lF&Bq';m"11kR([!HD'Dܹ41Z am \;Hgs@@UqH*!k~zj0N ^ Nxջ %qp+ x9wR~tq9g?Npg0`YUlRk7\qaEߞ5xhv|;aˆg>ta|ÿ+O]x4}_XŴ0g}xvVN64 -}yϾMyP8|Q/~ k_GOz^ַd 0szb\ٲ@ݒxYg֥q}OuC7[ُp7Ϟk}de4wq[q-;D07DF%4=] !;D)Q)~ U'Z蹵Cof ^:jrɲРUUWqTBH׫oըc'c7S+qK>|uI~sz#Gyw|}|7+j+^[?kEu ?POD4VBxS2"_YV*Q#I#"XIa s@e!Rѻ٣ًz[$u4A)+CLGK8.crub"J>ڤ'6 ~L ؂@7B4*QǸDTqAX!ǹ #1( -.B>VU>=$,|-M쬇{*HO%$f૟IW|}58s)wߢLYEEIjf>ޥW q^o}w(Ulz&$]hRqK7Nn5Tc[΃8~q/^4jD}#)u0©/D@-^SSƞ |)z񡕣FZ J~M{ʈ\B㏪Wcl1b[yqk|Mm# E32Y럾.^JuD" ?p>">7N38ܳZSd @kv[6t-4fsC<;#{~7m DA-֊S4r?N+VAЖ&ϴÍaslT](K 43x3iBEʇ rXagmMК3LIl"jGsYa3Xw6S>eIX UF:JM;zߍqEo^gɲeza3}61YLzkP^-n mBspyNb)Qޅ%/ /y`ir7zZtqdN= ?R  nxN &irT6 < !#c'qmȠL 0x't; ܎ ;g繄%!cyޠ ,GcXi&W'/~,\[~ʢ홷o9A~cS!Vs|ura'ML1Gnzh:tX5 %G %U>_g_?ۚSۨ?6:u?=xUSY^>0PѪZ8ZY:?T6O~_d_x_Ujtᓶxϛi`<=ɕ\g{~WÎ60?>m Okj-"ˉ蕯s6u1J+u(/RR| _ķg_yMFŬ.l)H\ZGu5w#ԎRJw ݥo~VU4^em o8wo3ŕNo~?O=!-Ҡt[o^q\<^ߎ\5l>;p):f]X[z0 fy=R&ӳw 4_\x.!ShYecW=,xam 𚰛.ZVL`4s{FE &-;e|֖r'J kH!Lj'a#oOd SK h*2jTDY S1E$1eg=]b_R Nw[pRz \98mɛDՊjz+PHdH$G|%h?AkpB(L6tm-E6g )h0pr/.* źXip=AY,4DN#HȵLH { 'BޣȘ!zPC(Kux I 1b^2 )R\LuPEx ~?5uK=oB˔devӗ)W}帕yعa34q[gO059@euOmCeߖ~>]J%'A]3 ڐB4uĎ*=dUsfUܫXaK =I(L`DiwX9Y(5hcD3T%CN2#%yVO)p@M9◻s d+N#@ځ5ҩ*TΎVs0+ ;?HHRj&!)ROZX<(*Ƽ2&qʮDA 2ϒO!C%ьK r28XwV6nǦ6'ed_|z=uAO}m}@7. {|iqm-EQ%aQTޡRʮQ+pP#Q} esnpzVV ,;cMBkrH6o)Cڞf/ uIHsl~$+L9y5wH< Rp-+@"C6,Z9YMr. Y0tBێI 0HbBJ׍*՝ۍGa j}QFm:`.i2n\NOa"$qEDVـ"n h]%?*  騐! h,G6TQHDd$)ZqAD<;|%̈́6G13K'9Uueb>u}qQE>​[hMLh8# {X2T~׌kOA6y' ža1 @6S3G>YlȭELޏnx?Fi#3TZ:U,M#hr>ZҗwN xAO$(`A!@,ymMa)Y^rۓܥIp/goHR(IP &M&(gLB3$P]7%r8C3 pnVF*'\8$]^'9Ndx^^fE F%G{ik#c"#;5yіoyP(M8<7Wmܐ.\\*h5_d/ Ju HPe@Blr` Va 7wm)/RT„@ ,(oEh>eCY:)88DcGe$ߓScLLQA9GINlj7v:9,wzWtvrWD0֠V0'AHQkI"}DcU2GdU15U )R'J3^%)XH8Z- B\d P Љ*uYW]AzRYd IVjdі{%RhJ,#"=cB4ă,vTԃPEC8q/xşemYoՒt;'gI,.v?_P /8Uz~쟩R\* BUoy`api}-Hq:|CSWS%?M'+O qPfUk4FpjSjoey&_<1Y{9/rS4~:|Rpo.Q ?fN:E)鋶UqF 儝N鹒SOmYK7>-S.Og$ jр'$:s]Mqp5nƹ>e*'y=}b򬭘-MfEho揳,3ZpSNp9|MF~ұi=(8{S}bnqH%zi%.,~B^|L"s ]TX庝 7%ITP쭲(;"M q =IqmxTUZ Z՜K`꠽g`QeqD-#Orc,ɽ xrSUyٻS[qv+-x%gOqڤ90'hCd'K=}fY[Mp'PgQj$=(AsEbI(Px0O/@@[85#֝0+RhڂX>n-+M4?ʲb"I񴌀mH8ZS {Z ;c$:'-;lW@AzR"֩`RQ6t9 _49slW+w}U~w]o"}Mx|cȇC9Rj;Φ-Uurݦ8[1-(Vjtz@V*!Ձ U T(q^| ȝe)ЂsX# cC*r.DgQG=Ho/j^=Q֣(k;k t/@me#!-0ސ֪p:Vvgs +(a#bZsX:VWY|ĺUISWkJS^rj M$V jr1#,J!I*I[fӐ26PPҴ) Aznl $ TR$2AB XxF3"\G^h]OύW2JAnZ9 V+6]^/ty2 ѱuEz{NWmOu5e-6v]b zr;?zЛ[>u3&o> WVDTu@Dᢶ @8eE +8EpGlipyiN9Q㉲ <Vro$e4V@~4,ЗYu8ෳAm7PɾxUzqrсB('Glp\hx8 o,-"E gsSQ(UXGѯ)F J=>R3BDN HQwwZbn#zz\2[zզ* MEosVE֊BFMr1)W 5D=!>йrYCD'h k)-晊pw1G$@mpBncV&Yfչ,ՎFx:*.'GAԲ74r3[]|2z]č /g6oZb# `j0{x0x(%MpT@~2(8ɷIg{9A>֋PǢd[yi^h^Op6ػ{3pK}D,QL]EbPey6?ʋt-PX)[g+^\}Q]V·AX8[Kߙ_e-g/[ոK%^r=2 78Nut6`lӇ~}(isW)̽Mß3N;)!sZ\2ê59szg{*?ʍsiT=9(]RI*5m[f8lČ]VMFqmm^祩1™ݨU: gv#vD6'`VK%6v;MCSf ^^Gc61!Iu{Q,N݅~g=Jr'* {*5 Dk?"_&03T"yP'ˏjB֖O4Tآ:ehh(7jξO0vSM)t,waHzD1)?9F, 2]8xg!SB$A,S?Lɠ'v:FXǭ)Vhgܛ 8ȓkCW N n,ϲĞ?UtF9wZ ' {}oN>%%y{#m3kIU^vpYު竞|}pg#;f=L;p|t1R_|qы[PJ o&pi ?֫mo³8W3i:=*c 6գ켏g[څk NHNc>`olgjƛj, {df [h];LxƤ6hˮlEQi(J0-Ţ!(ku];\rص}VW|c7tAD4R ӥx+'9'^ De(vsxQG9иDyAIиOz\lnh+Mܓp=!O{2! 
H~7ou!ǛV}j?7.Rk|;-{||m#qحڀmE;]o (rat>[kv= {ݫk1|77{5Ԯ=cg^|޽~y!O6g?mxb~z~‡*~k]?_]YGۿo^I_^;v'zv@}yYg-svx gG$8v.n;/8ODKUww̧{nO3qeͷr{bo/Nn+z> ǭcd7,IJ)(o>GDCE&)hDlM>[޻eUt<^/94i2nD{ulя{q^-~Zu}I\-R"P%[r[;B4J&CQ)4rmv~_ 2ŤWА )fD,BY!4.Td.J6͉bc1v:Th!q{gtS6]m !D@4ۚ(c\"khs?,hu$vh]Vd'OւQRd F +I4HBT &Y]UJ7 '`TcԚ1$kF l"d2*It0tB0'Z K"6Pd06)-!A16oTʲ΢rQXW= MHf/heTZ$96Lsu$T&YBFI)5 F`o !8}YeT,yrh.jgfkKkT?@†^ӝQ;xS\?mL2lCB'dHXVvZϪLNԴ%X3sH- ^4kr-PRP< Mo$?%@wc5!w"Ūt5}GYG(}-%fE6@Y邙2XaS8 (/52$n:F`JjK&86P(z4X'&Y嬼EO}hVb[j)WB&(lb&E hbx;Mڳ$I(GtD u;a&)CLxePƂF۠S]Bm x *Up sVkazpe^jvVqԡћ2X&N<䪱ɖޕ2A[<)M8jBX)hVc6Tf1i 3UC8йFơ(Mհ̡-32 _]i%RAآy,X\2؆~n!Vor3E UJ!-tFdZdC PQ(XIMa`V# JQꫠHM%0Mr˴U@2=*#h<W2zUC MT4j3t92[4e DBP@fEiM7RdJ&tkJ@AGŜI' sGPAA. ZgWLUC/pPRH 3.ED1,2ȂgJ- [!SѝF4GQF =~(l O:A:̠lB)vNQ2J!IYHI(bbc*Kէ :Y5($W'S@y"$$SA Pe9@ aΡ },|:omBq"3aC~9DvՋ y)}ŬbCu"wjD!LD]i;L?M-}gyQWdz|gT˃4 ܦLkP\ `Ӗtd髒ѫіUe =;)<ޢY7q_z`!Z yPK!2QӊF )Ե"az j1zOxH\t$#(Uw< om UtWPunR& :UQwFT-pi[&Kf$ȝLVM{t)x7GUD|[}%0+xme,"{$%o)z1΁6D. -r!![R4P PޘQ{GH)i*am.F(Z TBtX:"t5*ɮ$ݱ`u'm>I9(#ZMV5$- ^b8=$@(YEƮx63)H̔IdJ!2@!~Ѓ\@ CCX("m,pH#X('a'j" DŽtJȳE(!cȡ( ؚKd7`,I{Y;A5j% ec' T~뭈 K*©6,eШ JX */Vm+UB&!Sj1!hma[<֯Nt8گOV ]DӷQ7]@L3REK@358)Zg(v9Y:Zm*5fFH!#d%B ㈞ ofe(&%<%*` r(SbHz(Gm(ޕ"کI'%\`JtjCB6= Y V):Yu~5(J>Ⱦj7L1KN !JVP0+$!ZH ΀z"2~_H O8%0hTC%#qM=%zMHOQP }J*]\0Zn% Bi03;iSV\C sʨI82I8]}EWrlAHҪ?rԮ#R~wE!C tR `j?-zmoOVtz:_lYRV[~هT`hz%w~_9n?Šbfϯ[`3=YPiAF9?OE 4G̏1_oӋ9|~vqIw H~?mI\+͏//roD:bD8\iغY9;OyFۖWx|6m=φ=mzkߗ{r5}wO 7oB>o`\pQGKzk` b> }@b> }@b> }@b> }@b> }@b> T XB6<mx> H:z> 0> }@b> }@b> }@b> }@b> }@b> }@z> W埓||@=uR%N}@b> }@b> }@b> }@b> }@b> }@bR}@ʄr}!\||@֫'rhHPp }@b> }@b> }@b> }@b> }@b> }@b^_7;lIj[=l[?εŮY>p2I^[@2HkӏMc7HtzgJ(}ĐUJT:54 Z\eiUɈڹN#KLZX3 0S$æ,ˋ^yH%Q68^}_;Fe ^yBs=(曪y_wb^}EO)[Fp79\xϜϠ0g)|?)|q_8ս+>^]?X/\`2BfIvA(/^X[_.xIRQ+o8|FcwB~`w…szH<;}t ۃwޱBW.[^k܂5S rmѵ()]Sq[_7%Y:).+..peϛ$?>ډuۿ/׹0g3v?XV \ЦW~nጿz-"ƽzHYM9"JfMR IcʰegiC?;X fix8(^๩1D#6:a#7lB}L5P{d>nm7mz9Nj񿦃0_^བྷ%M+qOGއnw}-ךЍ.B|ڦ[6Oa^NY6K6(0x8 Ʌ05΍zBjz{%RozO˗1D"ҍ{dy[HQ$N2⤕^&f kr@iqPTKxbٮ+qgc`Y-kV9PeN(S'o5m<_цt}/ū^7nL)p~85>u{0vGfiO!Zr s8|=;-lD/21\ $Z8*jQ./pK[2zhr{$nXsKA /I ʍIöcEE |MƢ.l y֪E3e79KKXw-GDܕ{%Ź뚨udf&Zp2j^fiyZ\xӟM|g4R8Ld[Y#.k`O\qGޒX,{R !HB;MDLhcŢas )D%!' <%Nz)i3&ee z&*O[d+~QԻ漪KXzBU9f/M`8} ǓUE z_~V*Ҩy^_R-򫗷zu׮zK˗7[ pvbۉQ{~AĚ8`u kw."tю\H3!L˅дG+ЮPˎ7/穻TͮU7h{ bys脓fx#jާQ''CK `8ߍejurPccv!T iFNo~f{ y׏SdvI0.4U`v8Ez2"'dCkt'Iۺ6{H*9/1+4\3ג5Y/m[) z37xTtbdo<|Wu{Rgg;iɌ.NûFvkRX:4{;)ʹĂ,ٵ*X,Tyxm2ZTS+uٴq rȭQI`d18) _?jz\ [Ҡ3x{nݳu*q[_km=z!存[k^;IwK^[~Cwvt(kKws{jQYFKѠ0!e@g%Ie (hDhpv<8f<5%hsɥ3`|.J1`J;KAklhLF tX[Z1pcc8xҨd0{ ^M" @2 I1;; ʳ=2@ѽQk_|x:gwkv1ʗ=k Fjj1D%5IBp2da5G=p(;ԭ7ebV*õ#Fof#srͭ+D u%E[6 f6:1 UJF;LFEYad4$UI() R"eeA)6*ȡnM|Bca&IrZO\L +LċD ?2e@{>*c(ˁ1ÃzނZ̼ƺ&%B\Va.G3@'>zLSQHKe&(1GW]ޤ ^U@mˡ:bdG1ËQ?uO}w4? gh}7l7hc+mb}|:vquii>o%岗FE]An-%I'i*} 2ӊQyh.OXʇ~a7%u)OP8d"$J,+Q~ +6F}@=B8?8͇hM^KȖ$:i°M\U!`aCXPf} *jlWn7X cjOFׇˊ^&*y8/{@Zd|=֙4^Tw) Ív]n h {$(jwH|YeJP fSI-,M ^0bQXpAXl8-c9R=m!%em!%G[p>"|%"59N uܹ'ڠniIb FQg!5)i=RYp/M$(wqK0 H`= &WX kif툱~TX.6qb.&ZmZjӣ>"؍p[ M]Vd.r`*8d. -9]Šo0gicsQ.K]FR !X' &)cz 2H]ņ~s XbEd-";ZģEܜ  [풏م%LBf:)@*k90H〆ZDP%#0GmVhg|'0i&4w|(~Rl8;'xbSwuH{Ťd_( E~Gя֦ [^!oͰ`KlZ"r.vvPa18{(`°'0'qu_u7Y?b8Ŵv'y? Dꪳ{wmR G\F-i,aAmk1dp rh]}k.WfZLr.8UMe) b 7׽YG? 
'IpiyUXc (P'B!§୍cTSsXIwo@+*p5w(61|z?ݧ?D+jpYSNa[;{w`x5Xp+ޔy`f;fƈ[0ky Wnɦ/j ZjRIc,}Hz.MѲ^ m39Ϊںk,r LcPgV WV#:(kA- t-/GDHKSjoj75üq-y^bE-]G|g5!C Q^;Gʦ 98U9 !u$0d(^&+d.8f7"glAVlIS.%eKu),Y[UYd`202^&<.iZl8O;o[ E%{֮}^7 #ݵ9\g0/"ŸՋI篷J/OLCaY2P&}d}.^dTSq]=Q#l@ z]Tvշ?|_M]eWv I7x>xgR GK>rҗODyT\Y(Q3kk N88/3Kq@]GXhi)p*ü%9܎-!NȄ̓$蕠SBQ}҉`ɬp?cun[wﺜb &#Gq殱9a"g8кczzs0fK6O7f!,an7MwAz^jYfW7]<.2S< :qtb8?2}REs^-o3IYy*4:Kwseܢ5DQKiTHFQYh|픕)(|H\y>Y_57Uz]Guyg?/O|yTJpBf22rT*KD I ƹL콝1T bNրNnBr?oBr %%2x̖y&erít^] Ogb 9^mI8=>Uځ03eh( SFA,TΕDj'jVAY8'*J+?,$ϸ9c=Es̰$00 *Dj9z±1˲*:(4eLt J $!w6y&gɔ ;CVnڒ!^ޥRLJ-x.2w@ie|JNiw5o% G+ !#|^EITґjO13Hͳ۶$톢|F8)cQ Ym\FPe7 z +״6=ҴK&V^dONlFvΝ %,W ̷AIG;bZQGѾADc!-Y4ϗݻ|qԄAMܒ$'I9*j@bqլd%Lun1Y;*/'q^L> cһCjmQ@?csqMoD]ݑlFrjGUqF L ()TsS[EhV]`'GU&=Z5D: '¯!ոK;pe{k ,(op.Q\O篎:z/;q&,Y3QKd"g^J@TWRRH!Y̫ޜڊ9[gmWqvꋿ2|O~1s-ҏ_7Ԯ>׍/y=4=S=[=:NA?:c ռ\*th< IiySD0di)B%)tLL2suZ ŗngJ/!J4:z.}נJGkh"tJTăP%1x{;H:}ūCgwyA2xޕͲ]Fx{={&Gs֡^-t䰚Ծ 0g!za,U$G ^*"X$Ѓǩ@e!Rѹ١ىz[u4Z)+C %.cBrlh"VPTۘMce&Pv4RSJ1.Q yqĹ #1(Z6=KܡZ }խ|4+Wėa![ĭVşZ>HOMALW 2ţ&76O˳o]E{ҥ3[*J=}mۂ ځ3iJ0;=*dP%dNi ! 1jޘB# "qC9J"p0BkML)iMQpT6aGH %*1xZ. S_c)@LDwOt'LGCt&,1$pQYjib3. 9tG[eI|q.@.SUI$%xZĜ-&Omcֆs;FD ]nHĔ U!&*T~U'>65'"dž,,TN5HC,ƪl>4`+H;Or;kQǷS,ODHBh4Q Hs||R:C*f9\{@$bTmD @'DLີb/P=W1F q:_20WAi}_76}ງf*d"%!' uHSrrzٖ*$@%HB^"x\ M@^N"Sff˜K* =׷iy~uZl}G!u'PԶf7t [KmMkW9|t7Bɩ 5sL ),Ӟ呱HJqD EuFz1~w1~ mf"=Qo\EbA4կ~|۷A|9R#VPژb/}vTP,fٛ~e:h\[Id.?" ߊL> Npz<44'OɮCy~2OɁB8`t23滋/ӣ/Gj. q߿-u-ݏ,Y8^,x]! A)&KdwJe9|8?.fo~+K}xRW}`Yx]-Lf\`6/mG^vYga˟TPDxZTVKYŬ| &z>oaBi.x3HIq$[şDټSiFIoYkp?\zOg6evG ǟ^^A{p?e;͖ /0{VIq"Ǘ?mmb ܭ7&iF\~~c=4wKMy̓bȾ†e]T«=·6Z*zӰ[.>|ٮTvcow85IJ#UѼ.8îNW]{c^<}\)jf?O&R4}/|G&-/_u^ih~nILlS.ycZeٲ0IL;&cK]j'o@tv:PKRsā4Og'oHmd ^+B/SO'e*@Fv ov^Vu9R[Ծ(>fyb i(H99*{3)*;nyގs7q54u'x-t1otlY|aðhr4C]Id . vY~TqF<͓s^E/i\|1WK;?y]r'-rF^xa;<(E9z9(YѠxyWIZQY]SԗJHs4·lCwyze06vyq)&2gy.5462q6τ68~֛9JI( HQz0a iIf,.^ Č T/O% w^)EUɻ=fr4D_ٻmep&6CO9miޙd< jmI$n&]Dˢ $.o}H1&l-!W #/14)85^p|=FVn$M(.tEi<%5ԃ1?Á4{{-9H"3kNJS6("HIkk,OfT:T2*}xl[}~>}y{ At#zqM)yؕ4ĝiuxҤ^U} $>>D]i? WdsNR!g%7[$(6; QNIӥHVw[ I":_}9 "ĩ)*c`ϧK9ҒVhr\:zWzg@ _\%D~$8'#F5|'\ a@P2~|<+UGméSZm]_m V-vV.pu+j?~@C4XMf4nFm#gspbsHI#IN=0섀s$rufLέa&:JFs͸Tmzz6.4`p?G/~y:GvUHO3u MprC0͝6a4J9q9ɽq>M=GhPcp|(t4S[ckZ~sh!}u5zK`H[ݗ-fR4Ka}3fTke|kQ `J3*^Z+{FK-3tW(cj$fJ挖3ו M "fVhQ)hi$Onu{'0D Ϥ7ja)!sA_wHD3^0ɵr B($0f44׎9k(0guiZ+I =U>P^>|A\ {G)}a.4_~C@fxS˩Lȩ4i:bPPyOYIp?vGôH׎JQUVGx+JPOJ7xlݛL:# `bS[1h-TuPZ"˻52$a-& E?5tRv̈́-BR;܍aUvo=7okXެkۣy:>9h=SFPLP\ioh!9BkA0c!ќSG,rNxneX(:!Uh%*wIM&@aE=V 1-oӌ}qD{TdWpr>k> ;~ޗ6WPc:b"|sU9PO6~`a [&hv,Wb)N1|;%h&ñF1+A3+ąI Ayu5tZ߳a+Rկ6?v s! 3QaSE< &``֓Ta* :ӎ&gS=<ߚXq, ES<^SliAqBblIbDnriV4Oƞ?]pT q5ǃ_.g! iYW. -D2aZ;eBxj[-E_qUz=ٮ` ַPaWnv7qH\0BNXKEr`C2CE9'c3VH1%ؐNj^xҩ&[H/Sw^"D} ,韶_: L-~ݭF%Ib% N&1Y#!dQQ*/Ӟ5'.aO¼lWх O#_>6= 8`Wс8 pp޳WG7zw_ޅGoI㯿^NOQjsa/lg$ge/Yi0V3IH&!l&DqahPJ2< 9Az(,ǿ f"QɋqSD!)Ӟ!.|-HȆ)«,^UCR{wo?H}놾5b瘻D$8 X~ɮB9Jp72Tw`8 4=( \])!PZHL$1΅yIޖ_<6=DC}Xed79Dӆx/_[M[zan_٠(I0.|y?*Ff| (~^{tHG&T-v8O~6D+ddc&f l$TFpY*8ޡY|GnSz*m$@ Yg7eVwAΠ[mV=q)&jFACꁱeڑC 3 i3j;'oL6ɠmKKarqYM9,Lp KNۑ>.PH DS2\17 cKiȒW9r"2}IXLlv!Y_6+'"W6ߌ\!2(CJ).֕L.KΑJ4;ތ=k~ z ׫j{Jn*av*|;J \%D#TU3L@bի@Lz/*#aXU% p"5%}*_%"^չF(d\-V%J[8C(Tu\wO f-Ky\'(%#To Qo,Q8 RAE5IЌ n i8;{xƇG'ߧOrM1<+83d9̘[L.+Qu%XK_^mJO&ǪqLiOӣt?Zם1fby C &G v ^(v0ϵ1"N-ا_64ʹx#F7ʳEk{eO@[YoN5^Ta$y\֧ioX+ PukoJĭ5C)ָ(CVzdM 0BD{]p-vr,;v>zwkqYzC,>qjE8ъE+LISzzK ߽bR84~?NHi!Lˢ\6VkwNo_/Bˑߎ\Lq~_)DݎJ!2~24@4_ZTT\#gyq9GeBCb0Ls0i n:̩VMG84-O9ɳia,}欗|ރ1cix,SkȃZc*H%B~RfzPT-԰y.? 
/3ٮLp5[V!dK{T (6m JuHu+YyéTd4H5QA2xNMә# zZF9_zs_dШdKQoֱ%؞,Sϲ|&˷ۃ2(l^&"݃5 1P ih#JOw(~cOm j9kҨa=vn4z՟wJʋ$~][[=a2Nȸ<3N 4+\t]Tx$v3 s\[5؂ <: ͉ujL꣌q6^bSNY:^2ל+\n R$ípkOr{տţWK-nVenqYvK׹AxW=J{7XtbIJK.:!(XBff jc00qU\j] 7!0Pr]uP:TK G|$ lkЮ9^="W&sA3d#4 {`p%# ΅&T;,ghЫ\XBp2nhFWg}4en<۲ ®]ʦ6k(ʥP/g=ELG4QGhj Qhv+aj0b{4#0{[@#UNst95X-.4R @ Yk3)دI+/+N]X O1*)ӟV0]NjԞT Hȉh-*i7>h ݏȭ~~,ǎ/یiӃqzCZ@g>+.Ą,Q?~=}<\|>탟|[`aЛ!laA�xg_'IJX6= w;f4[6< \G[Δԋon.~ /q8&3.y'i ɭEiV+rZb'[̴W2Qh1xϪIV2ThrA[jbMͭ8UX!ÿ&bҊwb5u[|I'a +ӝ*!tXKa.n!{xHӉ6 OZ[BW54t Ԕ~Ah=|@[ 1Z浣[=IW f6{ ϼS:=9թ P@'ָTkhޤKʎF<}6tOyG鄟xR?_kZi.ڻ)xZյW׾F9FW e&O bl3?2wH r"6'ys(YNZ+qt'ǐٮZ3Wtiʜ}[a԰%[=Ʋ [+ބ8 jp:[i@-3B!9N؆[2!]bn v|E$H&r*p #mNT!PMJ,*i\`ڗjO.Y߽ە.8+=qDk!B/x9ep,l@wxYj)i#!˳~_]ѴʞPB҅!卿.b9o;ht6~IԺX*NhHJt?TPnڭSjQ_E)hPPo\R1M.Y⻇es/?iT.U=lZ6!;Wxh|_xl[)"`f",@2l3,p{K)c*NrWG8/284_ N@/ ICZd`*3̎:c:Ov~eφ1^ `E:fzP1x1orJ_}(i;-(^ea/E+>|ADYD]C.'Ur t.o.HCN ta8e7ŧ.Z ,=q{{ ԭr6>(gKz bg_fN+6r]tb)g#΀c-Q0;ǿ߯6>/]<҆8[<.,]ڟR`3"r6OCCwOug[ l&oܹM~jQu@>&T7>6{%d{?cܒy8+}1?jŚ~/-]s'Le0 ?4c]%mPЦbgMMAN 6Br3jLnơO3w/d)nUmӢfS̸B)U!ˉze^6 .D$e%1Ē-͚(Wvgq/T0+R0qHsz3GԬc)[ ep$m -YGQgõeb>[OhڻDO& sTȾef/C.n#|՛nk9VB ,eR"l>!ɁsNƒd8gm2mv>H(doROHA K"Mt\0_j$K""%V9(IӠTz.=y}%!k-q3jKJдTRJ9͝?SQh(GFޔ#AZ ϚI@Vy1uN?.tCcpwA/ů lG%JtEquOQ=d#KSNs;O dWYT4l%xU'3+?-+Ĕ!֪VVRt"!?m!2CT0gnT$acè6U2쾾BÀ<`q .P骨:%@gr_RLεO@U"BsFAQe-XBraèƙ;+XՈKS]J {:(z-2ca6E [W=2]Qn-|MFGP+ie$]:\ $6 FZbJl;>`Ts/V㟳u%ٹҁQ|@AVn5\M5GMg^w-L)g VhϚ}Z+Di)&la&a&(vBnATفZIzWSd֟BH@UNjVJd?aWjKiˇ*E>M㕒WXM{m7ߎ&Bks|<^Gp z7|3.2e9i aJ{j'N~^aj,O7Crj8A9qbD(L`1pAL ÿf_<:.0ryso#K)ၣL}DfOb~8!I*+>G]0`FX6-$ᠸ#I/^c]<"/>{ cgyXBV)T˞FY!eX`G))NG2J0S.Jʗ#/;z?[Paw=6+ij|;{e:6|AMgـ3C+F>^9~I*0yHȪ\޳sNU5/0)d|إu4tC?R:G;~i$~7:M4k^ݿRszHlɴ(Җ۹X[&`עMV[u[a,T9ɻe8f-[|zd7!`4lGZ4^T ʈi.Tƴ&{ \" x؁|ӥũ6Bk3ƇH}3ҙNwiDzo|Za< dך{K@W\xAs5 #ǯ xPhCdN9F#%@ K 8/< G:L3Yh&?NӁ:?Mu.6 p~5ٙ\ʠ}\1}xWObt_jЩ z{yXRcBL9:*ey@,\in5",#a1\e$k* R($S\{KY\򨁚/-eiȽWa&:/ t4x|ԈY@?H_jXykr8TD>MiKúl%ѓBZV73%+3A^WZN-s.@/'P j!i5_ (ꆕ"֣38'YNôf<,'6 rfn0ٸB'=1ϜYG:T&{--,"-O)p$HJ僗I<ϹgZSG-j%"YE2deU+3vYa * 'тQ)TK%V99R@,,uLbn(s){?J1)]1/I^[J!i8kx`t5ﱝTKtk)j#hӢR;h|9^\VEU+]Net>&4P>g)r^SV's24U썭(%HQKUV[ AExI pg7˃=ւPJF"Rر}<F\ce WmY M)Gvᕭ۳ur~ŶTy EkS!18f.9Y\Y{xN7rGNk# XK8-g^E7'6z5J:l0mgb2F@V|fm8'{'"R%e˥'O^W83p*1ᗕU|+WzrdßR ̄^m:: ۮ 8^k|T)K'P8^&n*^ }A a 9!Fp D&Xp1xm!HpiȜ貴|@0l4*H\Ζ03RF-"]aKkh<Y&9`-Lio&Hۛ6.y:{lhc Mi!R!LILh<p] Q *A+ 7&"QT:6iDg92RDP ?a.ڨ45:}C:m5!Ͱc`O u3CیɝyM(L#pvVfM̆kRt0{F. Ee8fS¨B٬YS-Ʒ"FmQ̣ٳP\ +P̮5\UKpF <*`ZFǒ( Ac:PBg/#1ҹ@&9nE{2ͬw6 NJauYXR,*:+>eamJt:MQAl(EQ!q8'5I{)-8{Ԓ)hЌIq| 2K!R8aAtHAT2\TnՒaYG|=C#a!g5wQh8A˭Ӻțb;(.K5Qɰ/9EToQTP~ێz!IqnŔ&w+7|9FYpJe݊k*[?N1(F)_dW^;;KByk]T?ו TnX@v8ڶ(9S.)Lȑjٶ>dֲ )qhiuJ^g'6&h쀤% Z.~i{%T7<*Xĩp)4og nz(f'kxL:sgkY[EXY,kb}+y~m7l`[ϕbWx\JY0*e!8L<OGp".k6[rG!C6OYXc0VG\e4;>o0F5}D]]49ud,qٴ+*Ƿ?fSѿCqEwSP:SCRHC |/%*jRb'8 iؒH5kR.%I.w 3B.JCҰI#P9x9u{GzT3 y;*vٻջey/ݳ,CBrdC8HlƈQFhZRpE;Unm(D(h,sTPJ:- H$[Y÷1kxY2zRpdx>LNME7Wo+4=x@HUi4FO %S-%QM7bHњlO -O?BAr9.tp -WH؃>BaH ҕx*8/WVnѕkqn/L 猈@rREt0 X$R8uEB%Pce)h ٻ}9%FZ"bn{TvT C)Vj0?o>7(@;W'֓Y~߯n _KLxkZgT.9]Y-WgWOTgSnD}Jso^e^\B`:'I|=z_w_9P ^fcCKH>ωu>g3Ld&G P9sE幏Kse8kw_0efi1\bSIݵ|5A4̲@ao;ˤp3>{._vhDI,u; 6II+3=x43kmxI%扂 EjP&\4MI{F)>.k~6_نdOY?#+FwW l;8/HLJ:[͇8|8P䚈b[aOϯ]U]ՕMo,aU_:K"L*Nz6;#8je'쭦L%0%oI✈~wnIR)1Gh;PRM'DK(_XgM,M&XjKN@ԡZ`PS-QSG%z B$JX@̖LZ@m2C΀EНF@^襙3MVIJ QØΩ")kc%<yρpc%q6ɜ9mZ g+4(ϺtO-6- d?W?~n} sR!}ig>jv1G`qk5<-??\bzdto}#ʥ;'FgvR1ŬB屌v T-l,2クK _4T/AiJ[Q(`i"Ef׎j*ivZhڳӈvHKk,ݧu1T&6 ~ aT?:Uǔ-*1TwwFSuچ|bZ(-:@l!;dք$VƒW hg.{cww_k:PʇG45&x})*z4U/0|պz4iyA?z7xUJp8ͣ<MAt=>]|}Ə8Z1[| >> )bڸ A&Lu.T\tuf/VR+-l,pu&Ggj㴹fV4HJDI# FjɉqQAH+i҇L3|-9PY<N2I tuJ~VS4ZwY9P$nHaAB)`\Y^feφc6Sq. 
I3fA34Ă3xc(ю9cx1SI)y(aIBrQ#C7(5zRB^tK<8h (?cݻ~ NJ2򗰍_~i~MuxC{M!f?Y0 /4A# v*)mb˼it)vڈ҆xs sg~7p^k1'U#k 8@*Wݧ-|ݚaz \ SiEQZ+9snR?Տ6(| ᭢w8 'L$H{mh(DJY&h3# a]gŖ+p?Z=" ZXx)MBDbw-Zt:(:ה ЃZ*՝=&Pz oTm*qMV$aԇD ȞtfjhUēo" jzrko[-ԡ\\$?ɟEΫz`@>eP3N:_@A ]* RVw^<]rJ-DIZVߚF4eaO&wӔdACiٻ*ǧ&*h t`2E BW_'U 93v*Pܤs_JFz~V4$vtߔ90],եγK?'6Z\njcn ;LdnTOF,sߖ',徕)UޟW}~GN\W$1YG7^Ͽ.4HH߇+=Zi!O⟜gc1z~ؿ+c5KФ5"OJ& R9!!J$>NJ~9 Z}kDHڐ;PIau\>?|wJ.q㡷+zl|@[An<צS{P[Uك_Elu-LYYYee?NvlT=6asڼZD0F`XDaC!҅! %pHF l@Q(xKr$ԘehKmiѵM!Z)5!`@tB>)uZ@AM͝WQU06xS#%.8˒#pez!8J*MGp L)b/ؿE 4 %*hYUD) JӨk r>&'\@2b&$,i."%t{up@~KjA[s Q  jm\"*31GR2B w>9OGy >jFA&E0&ᬜ G鄵]4\sWWJsZ2 M8YqR$kXa(D\P$4t224%lOKdNw Hzt%$;Tf|J&7?箉 %J$DLx#t&\KxhU(B" : JˣCcRq=h0HBK*qcM m8qJMsKnNb䷙C©C?}iR&i ͭ.+nkHȚ^x4 2B*8 Hb1`EB#:fwmL2!'K^xX;Ua6qtg-gͮ12; 06o?<|S٥?-ߞR8zRquD+n4.6_ݾcs5hPwcGV/U 7v́"ǀ|)W88X֌7[NrJWc9&xAb _B5Y4<$a+'p!%LbP%>a7Q %y &r%)%C|CkC,vv]jUG14Ԝ G_EX y"rJV(M&GL+4&8m¬h+Uh4̙qEH8(x>T&&rO?F!PLM@BG1(qdZ-:<`drߨ1M 26L CYuU6O-چRba C! dŠݗ&,5蒅H:nJB4kPwnL#LpfVU-zNF?laM %543٘b &dT2 Jk5 AuxwfBN*ulϿi=fQUa; O~n13CS;ʒżi:i}Ii+%Hl<>aZK7RM›xۼ4)?nOxSwvsB1 wyqЧi0a @;pi,iyۢYRv!سDpU?jfOCa,agj-*1K^Rrw=M!$S"K$GA T&-vE #˚J1 }w/iD]E:Qтr*G>pEh{ M͜II5-u MP+#;mrvRk!Pkv'i=`ah{o, HxOQ0?y?cU>='v^&si!BNڗ,sS:/vIܮhI0<|9">0|mEa- 5HeDVv'%/];; Up_7k}A:JSDr:nkvs"ƴݟO5 tw'+ B"i]ī5RCi x֐ۆ闁8&хP0,AᨧJE3fDm%)eB _փ*rO_29O6y{Ѫ\ kJ3 Lű7Rco"TTޜ G^z䣯Sʅ>:k0s4=Բ/ir s\O'{q_rB  C%rƢ+:ZJځT5| yp{w5Uu=@1[+VdSyؖFǠP jN@/Ȇ#/)5-{マ 2Ʀ2 yLML 4ܭ&w _="k1vFa,n6h)GQ3~_ʱ *ǶJUyMFadem3'i#x)Kg)g t{MP N`;//UT9csr_ m"]w9ڦe йUA@Ajzgm9/^]G{-$K'Fۛy\,g:Η 4.ZKF-#ߑ*J/͔n>n:ucB=t_C!WW߬T@\ TtjY@9K$Y+:S QL ww$k/z7yrw'hk&>b$&q6'[9Y^K/'{ eڑz]_/'KdGtG'yAu0QtКK@NQ T{ΧQ#&\}b*}&/hzJx09 B%:IY;*;tDN %ɳ3Tz̒޵ 8|S=/xd g8hb**AMBPYg=5pJjg [l$V9$@ quU @mDķy̓%tϛ>y%@tj}]KY):)f@-`\:2S" p%r@8dT{yDwNE,K) 3)պdbV;JF%+ E B'+%1r9dCE|zlWYӹ_ex>|~Lis`"r|CS2 LyX?KE:Fk GmM$V`e6oH:=UQo` QZWp[6EhVs[yt#fħȄܳ~p 3N0FfǻE)|?ZJusz1W;[CoH5H CEZzBVvɪM8qBQ܎  7Yo7dj"3uYWGzdw p2fUAta 1tsBɄu6:N |~JHHS_O \/INkϗ-Pr',9ʠ҇ʞ+{@]ܫe+> {gԣ0jFV <<j1i(@7gu`5 =FЉ:?Lf *ԙ1HQnY"_<5}:'AFf)%ՁXSt9n޳30(.bd£SpD{uB`¢zQ%gJ#+W(Ver*ϋO4\4/~%Tt^דݣa}qyDkeC3~@>Xz3bz8WibwQۣ/G>f9=ڸ3 ]}1(~iF_ ?]Buo# TkN%y`T 7vbwxcG^_qaIyZJz6b:2fn?AO֫abo N&⺡b +~G|?}.8ִoq?I읷ww*͆I^! |XXdb34˳8 < f _f Y, ,A< дj2*0jcRwJ; (,TTeQDڛXSTzTb}֊G#4 ,֔6.oOМS8ie4nHiG[8mfc}Fq֓.r0MM ~f 捨aC~޴bҀ#@y6nzF鐤-,xf 7l*C?RE1)5 :pͲr@wZ~õ>.տ/7e愹,`[D -0=>Њs+Lb-isٚ@xi [|[4Mk5Xr_ %PVQju"%H4 $/L,Q-<^y\{eAr@rY>}[0oi|7vͮٺ+nڛ\y77aw 5'O V};< X ѡ: QV/:9`.x\0ųT:UqOZS `)AC̣tzc"0Fsɻ0O|PR d2ӂjFa1-*DQ䕷TTEQ@cb,0eD3\sA]o1Dy3C0|P-OKV 1~O2JjSfh}SvF 0}atlCYwT^NO/o4Ҙn(#IJO:ʤ/PoaryJ`unb3)&i9u.qE=5 T=0U*漃#`=qI.9s8rte.HNdlr\6>6H_ i^3?-}%G hV7%i){C:.׸׷ioX!Ĺ'97TeRޕwr&Dyh1Ui^ytSTvܰ*ztu ]T 5c!} GoΩ[Of}= S˨gO1o @:\׬vq׬Ӄ8ujQGe6NB-~>8˞^<=3h]9pج!wT='!b% - eWui#MA)H(J&.z(1+txPf 3>##LÄt;7gA gA"Mba[K_rS'!Za']vէLhoU2efB =η?~"k1fȕ k(S9ma~:w1ly'ȋQ R 9|[|]?9*7AE&_dt|-`VD;O1o @HIA28V2F(J"GƼMY˵ή_OJS12>5j? $ܘcz?d'\8n(C̈~*2E T\*4~̽x@I95\dpr%3ƌ0ަ (7ZI+w7OqT^xO:O6RpEV9>T۠4{2ICtS^NzꢑSbed SF$%-&DQ64FU&^j4A6~io1ّCzdijShUçww~t# E?Š-XѺM+VTcŀB3] iÍjf8'AӷSy/nw;yg,i1K@QZܓ(Knqi1O#/TxC@tafi{Inr1&IHX}.z =ܨF)z`<')K%(>%X\,3?0SZjv9W@Ni]]M wS BD&)UfHMkbD+D¢6"g爒E◚9pIUMbΙ>8l (NhuxF˃ںHRq9vJ*rU1 @C}@Ι='eUaIlA+<`^(q9 MN%ҎDeX{2bb90G>n.HzO-xAÌ\V>Fji*_W7( OIȍb\(b` #XH|CC`|a(lHL\bh@N dUrKʅ2>'1#c9RΫVlP^ v *'hĸpgAj =hN'X_j eV&ýy%g%\-@ͭN\aZ\7-=j=0GFa 9P͆ lx镙@P߿+ͱsJO3XE2U S'X3&PMSjt_|QtJSLU"bO/銈0´u%Ղd=p^1u+矿Ӌd;QsigO! 7S#<*7SS$[߃uwRM&o1 pțP-ySQ{ Օ=8WO?G'RSAQ[a'p[P]p%EH#wImVBascq( o'ĖZZJqk 0B!+8 |cd`<_{bk&R ' &΋}h{~rbBeV{m{Mn$G^l |~|{ 6*\p\bmƖ_HJc-:Qzpupipj|=\cy6z㊤Z1Y'_} d/1`b5䖃f9 Cq<oJIe! 
%gHHQ+b滵Bn 24ckvk-^MŷN!яwtG,>VgK fdk`AHk}kDaR6L8H*.۶9ܴn)6qS IMF."bDz{ 'ƤB_xu &'Ÿcj^2E3Lѧ{5iHWtܟuLL疀ZLH7~{Puoqtuq׷׆ERXf!Q)/C $)Maw)i2LTL|~ ٴ  Rm]I_L> 7xA9[ǔ;A5gSev)V;DIQڐN=:.H]#WzNM}fH5Q? |J`,}[ZqAU4o1mLآ >{^mQO_J\赩kdaꋁ{/y y|m*_I?XJ1}T G>XEx% U a:Cֻ7G05n)2YN)R, eӋJIƍӻM%.qa|9OKأG[(p؇~8sam4Bٔ*ѽz=`"qZ"8r8qfirD1(0.=YO9sr+(Š"Oq/o^|O|8/A7BQ܅o 6NfJBFz^:)u;w5?[OuT>N{@$7Z_tLtt=MX\礕;2n=IRȹFOMOǓ\J6Qny*wE IȌ wqgVɅ EA03AT$UJJj_ (Ţ& cg 20bMgx χ9E _Άߜ;h:o;#׉yyOn\w+H.<^4ß?\piy0F_⺵A:^Eѷ7}'Mڔy^/ip RK,}Tp՟S<7'šɨN8+ 4cʅ~|pd}?Bg0߽'"~K1'ܧdև KMQ?()Bq2fRzj˸;咆-&>moN#& =DgX˳F,Bb+?ΌQ6m0n_LmԤ-aH3 | f)ĀȝJ黾ʜş~1>pp7!nA8cµnM.U>/2}b Kl$%Bfc&kE̡|stQ?Upa(4iAqCfQ>;8'6'CP !5jP@8r:0\E%`V9m]P(LCC-'H#!uB&0 ,&>~g*JRT3{yaF~j<׃a |p>Kd(4Ѿmx8dࡐ<0]JR7I#F]} P o.SwDi5r1JD^Q.n(5I;a%O7NSŕ#) \ͶJ#Rli-l-pHaw S5Y) av 6]= PHr7$e߁SUAwk3OCW KSF,3Q+&[IɫPPBhǯ楖/GuuL\M޸$8aQ',ȸ}(.}Aw'PTrנ{xгqr~θ<8N9:S[R[2oI,M; IVɶ9%9t| ?pzk*UC~.ZOeܳwucQIvA O)[G])9Nƴ[y7ΜڭwR}*pG MEvv+Adlm@?̼v+yv]TbAj*ڑ=C'am!r$w晝ut \` GFkvQ#a&ԾXGs:Z7D>%NaBzU7p~y.ڝ5Ö޺ lR09Lv{t^&r'2LW?f aMtܟ5gtӽO]\d8'QvA t񬙱-bۖ g>5HLqїPwjFq.ʼn`[,յT-Њ!l OS(ad;LV߭7GLPz=k l.:fy^X>+/RUBhRX)B9FŊg<^xI+#D/}J^y X."UdI]0a*lǀ*-9Cp#䠅N>wJ)вp$42JiߜtbXYXG #LӶ@wB@ ܙ 91,(t0kUTsd֑bQXeyk!{ }>j{iݙS MSJNl:Hzaj"M*cE~5cp8u^yM*v2;'qZϴbWDVNlurxߡl+y.Qd_^>V(Hqr ę c9U5jѡؗ#MJB/L.%JEɿ󻟾/q8}7j1 :տfgҠ*Xd!*MJCϾ8t~J~c.^Fxt> %&+8CYA!H1`o3 2␦I&7};>3 mDRe?''-s\mT!"qa 4 {{&"ѥ(48-'}4J7y'hB pmx4(?i ̻D!pmѠiSܘdwc(ş23` Cwȭ8ڀL%:k-(D7>̳R?u+Pf@o[xPǰBMԹjv*~^B<=(d ^M-|V}ѻ)t追m>uvbu>CY8Kf5_Ei&hE_Ⴓ&?s?[&SonڂbX&{%_ᬗn0?\᧴(,T3uypgVnIjͿݻ+eԜ3S!Y;׿Ծ霑Kn c'.£Ffo0Q ޷<Y'ݷNأ)0zYͯ\28:yׄgȟz@ww_PS//Jtsϒg0)^? E`w4.ra/%ՙ.(T!pV.ZUD) JӨ1뫳/v|1軋G_w/[,HV1}nN&]3y7%^/8oE'QY%r0cL]E>QYB|=RWg >ܓ[ Wgλ1nz{)N~No//.(9gW|NNi`zgi/^zà{?~5rgi7 qI^ca;/:%W`k-rWP 8h5:)-\ )g :րt \ x AgXU_cAu h A|gwcebнgej(GC )Ǚ }OFQp,xA9"hMhˬ[.rΛ"rZ n Pp­ =I,1&4rH^2sO_y}LG뼵DJ_#0|E1CK&A)F*ĄB$"{K# ڟ0 lEŽbD_{d6ңPu8Ga9&Ph?`$ބ$I pfRr8IYDD4jW *-^(!8+Iy^8PrPy)hH(UHJT@w@Kш_b.z0[vk)g#1R. 
\P%sKmWϞvo2r.Pr:PЬ%TZQTb(^A+Ǣw <2%-B(L"ZdFMJKЕ4ņ"3㒛<<)2K\lzʚT*٘`H=f'SJ3ͩKk'ΫI@[0j&C=ń/퍋cO#EƋԀwTkΪͥv5ۇ.A=oi)9<h4?n<~ iqﱑad:y2o>H' 5U[\(I1 ȔgAEg*iIS"H8r>rFhą禡M=tB\N=Fia1U}7TtAW.g Әac" R1ZQƧD|h4b 4r84M*21Y CZ΍):k/h@f^1rB"S%s*4 L 2,}ћ&\k"fqM V[M7U%_Ƭ @ b3j<]]⸭Tlcy&jΓز#"gMOy7Oh8toN6bpՆHm#!w!>ފ`*=Gڅl#[!SC\T| hJRTz}#tD0O$h.J+T{!%T4䒶ӵⒶϰ!mVQ!U*… f-684{%WatCKfCzhU^| -pk(3Z ۍT1jqu`?U'8E:9Hɜ-sܸ[XN1Fgi>Mq]a(Z+ƃPӅ9hPT۩Խʶp pT+[Yt]bQDF?z!Z@c9_\^ԤO--i#Vq#LMAcP4[M١َKr$c"9=̟@VLФjNIF9끚BUJּ YnZkqgJ5NVQ8U:imNOlkѿ9iXYneYne܍3pԺȊ*9ym4.3Y:'q)O&h!R $F%yn|e)pd~UZq7R/R/K.=7|l!u/g( 8/Vd^,so2,soʲڛ궍=Ok>z27U_usމl$<1aƣh EJ+|p(r"pNf!Wz)t@-1!@ Lsaȣ[k,%c`JJ9GFˤ fJJTfLHr+AT_:|Ssڋ$eSeXd%GGk5;WPt CI D c҆0%jRD6}ZУ[Zq-L y'm!o4`5`5`5`QՀB~6Htٙ{;->]>Y!RdR1 v5l?GD&S6NF=Y=Y=Y=Up۪\g}yb2FEFҥi?{,_ X5V[VT7ߚA& FĠݖԷ_&Z#sBOh P*j_#zdD'^׈*b獎?UU_}bPb+9l#p+;iA*8Bw-`ZAL>ELqb:Fs\1q(Dxr!DMSz)CG2N~}jί}vAVݔ.MyV WMHB0 "Np_Pc{FsMe;$*ՙj; W@Q-ԻHMKMls"@dSl>2N]tCFH>TGg^*ʰCyv;f3ZiМ(CB@PjO*pʫy+ԬоT=5!$_SǏr'ϗ'+k:fy4|5Ki_ Ӣ&TZ7}m=QL8ޭsd"f!GQVS:nkMQ];DL'v 1+O7hS[8`;1/0P٘Jnt?D;tg8UXhϚMY῞9ޟv=d~<Ֆ%}t?';Ymȣig Rjãm _Wq$H$Bo%f0F[tJQ%yG׶&}Y/eX>1vqN( l7N9;OFNIoK׊JL=H`"/H!aYqWG ?a`+Ҕ>PGRpvx9:}5QgJW6Qhz u- lA5yvCq˩,ᦅRg-DoQ /oߔӚ4*{e-^@,HEl=.E8Cp9fV͗kÃheMSB\{o;S;w FW (XӒu}eP)FKCshSzý"ݷ[{ .VuRl@B#x-c5BCMRb`Nj>xJ:/!ǔ,$&X3x6B.iFRb^Xcpj,p y#L&}*YEsxq1BmVNXIuhRFTH rǘA3C!&OH"Q< #D)hud([^X8&*8DG2ў8$#p"0T 諨Co0rWZ7̯ȒUf~E=W)ґ)d}"OH|DK$3ATva[4.L)L_vkkF` Ǖ-iWgn}UEn@96 098VB9FMP֒pSaĴ ?N]I\Ӂ04B2koS8:z#Uم?_Uc32Zq͐'Ȟ 狘P~ۿ)!['Lb%R3**8_gyp`IFFh {gAAP2%8h-!y~yBlj6\^ޮZF@Ub9qG\/՜y֓Ò*m j5 haJɐ ~HJBR0r&EMas5'ƂdJ#yf=/U8rb=zg0yT` `>٘4HaI\Rpi^:jPAbQ*<3cIRo/U-үgzn=V=?ҵ[zq"{wѼ+C3 N`T5 @"҈a?gzVDF!U]ߝzm6k(XqMr@,/z hNgB X׎ZMYaMA_P+ˍZwkZ,ِ &pjRfI8V 2$ ߘ2䊈#9h)%i=bMiw̮)]?} rgf=[S&|r&%6mT 2C8q\;):qJ JvW;LKF**lYZ>lV>u6 ``"y#{9rfܸΖ֖ XXT2B 4FZ1Bjpp٥*.x!hTl<6&YkeCͨyZeĄey'˳#5s<rߏή'#ވ^uGcXW8?k d).y=! e C=!Z,3HͨںRU_TW5rX>_ɌIyB<JtLoRՄTڳ=Y[-Ti)YZTXx5D"V@%tduT&X_ "<9`)bm5Cx#hF5Ǩ 1MMy帢J!$SaN4D@M\0Q,jT(a0~LHN4&F@ pC;6.Ώ)ZFR$rZPGz.;TOx_PJ*U*TRjV* DTڸRIL%͎L$DrCe45!A- hZh85a@4CIG{I̪$Ws[A`8ҫM[{Z Z``&n ^*+<%'\iwz|;uKĀQ`HM3uD3uD4")D5wQD`T(B1YytBl.RE ;c H݋YP>\VڵJ׮=pU5ҋ(F6҄b+S$&HE}Z %.poqz4 kW 4?(I'N! 
h $kX$H7ޘ6HBD]ߝjkX}®W @yOE&>L=Z򨭸?+ph9`}D%||Q\)%@h lL%[#/EP@˒%PZ(:K ^6(#Fq<-Fdu]߾ DzV#j%iDt1ZN0M@Ȣ׎Wûv4&l R.sѳ|jS "1EijM<:s$yd`m`e<iT#B>x #Z Jpޏ:!Qz֋(!O@mSr>l[yz_7;ҁqG>v@u{( f1|,gce KiK.7noM{i9L4,M&Voqu l՗!2STO+a 3rda& Vcb+6hy%ihKRL*MZj( +PG¹qHAO^z9m5+j9|Z@nx}%:W糯S))KH/~X"=/UO9OCGGoե_t r *~Of6OTͥShR,IeoҤ^}CL;Ԉ12g#A.B@RJ1@GXWXLvwu0@ysŅ}\r!.}˷ArONNϦ0;@50.ut6ŻVQh_Jݙj^ g&GVׄ-3Gg5CivE[F\P{|r:턃!л,8 'ctA>N܅dFQ]~lWqˊ*m40qO4nh)ڮdנ2gaR5wq})Jav VVyC~^}+.qfz 6z!a0~lP ņ#\Hl.0S{8gCҁ136^e;^\&@X#_|aë?'ۀT>qzJ҄I\9#dNUZ6% ӕdxmLtS3sXz@ǻI[xY\:Sg$J@2BFz?Ep&ŝS8t,G"$SNzk:G0T]}u;Yd7|jw@fFƱS9>C !@L3ɑ:^C\| L3^ 1>yW E&Gܼ%PM3(T>)Wf2^5wvbURW<=߾_9u[|vҽ& o]iO)tʁ5HfMzj[O xfZxgAz J!8n#}_|$gyu  8ȵ0AyۗmZ^[0i9ڮģ5o҂rUh(KWp|Fi[3 Pf @x,//߼^@1Jp,JgS&LXʚYE,Dy^E}HƸf3Ƈ&6z`e3xôbP`AҸ!]mमS R^C-8!:Ylw(b.\iIze[5$?M>,Zc}4:S#"oaqw2k#Z+Rh9c\:hdFi G&yj,V@M)M2~,^]hc"i IÅ4wgy:PVj 6Q51hwR)B)5Y,k#8|ݡZ7n\ϳ_PpƸ|;#|qpy{C>صtbJIMQd?[|=I WE&&3J_V } }xJ𞓶BIO-;K;"~aj>g* ?YoT c:=OӢ1͉v⢬rXeQf6jSR)A2'oA VIFPKW.mKVH#c8p  r|.fl}w3,_y=* Xu3 _o\=^rFs2^Ele}S\lPCs+tͩf}=J_ox!Вd[/ϗ,|c`zN">.+W8YO>Y(,jkfM`aãg_b3io_`5\xAȾ ^?=h' ^~L1``+;ZFƙrx09hw s+ȹәas@zjVAF% bbĤN*G3=Pʀ(>#dQ-8 WO=1>G%sTo{00&h;ŃB/Dt8p@|d]ۙ}u9u<%1qazcJ+ZmU+8E.,!"K LqY:5Hޕ6r$BewvPT^`ه xa!f:lw7HIKb)R"`[M22"+h R}JX7%Ǔ׹5KAV=׫ڧEokgf7;Rr磲6XXQY2/v̋]V돇({,9;BY!rm9zeTD&8*_`Rpsη~A< /42>Bb5d*h6H(_ tRŗ'_~F tnw,t5i'v/Oϓ5-/`w\̰Ύ,+}ւ@O0t_^'W?qҳ:L^|+"DT/f_+ Z z tF w' jvz W XpⒼ| W%@\C{WO^h+"bШDTw"bv":^x nv*%.`Tn`-T٭v8ܴՁ!X+t؁ČBvVʳqRi잇K]}( _ic9F׽[ggyS1%q>W߆TU:$V wV<B3{9wQHSLcyDOYFGDPIsʭ4i]5rZn4YHen$:X9Ih%"ڟDOUcuDRy dcbvp|Ql/9H繉+%'_݈t֬q\vv=w`㕏j's.͘:аe3$q h:S jGH.x%B@m8鮚JE2%SPZ3#U@]Ze<59T#A~ Ɍ$X4j-t>OS1&3 RfbphRTbeտRV/OXYʪcvf`x{(C. 2$B@ @YTDBFz]V)Z,CghUx4ˀ.8?*kF5% \Mzw6z /Ξ4rR3aS)Lh* M6dZ`3)qtlc|{Et~ ;bb39ߕ?H4kp~h, *v$]öRTlF@<'*RVʟ\׼2NⳆBzӍN rU>׹~neһ' nc|{#-~x4ɕs$Szӄ(ټr/,WȣXJ[#O#<eg|T' |\_+oq6{%d4Eqt=d1[<aP<1 @ #g Jww1LF5O7!?ࢷ+f*FDG!\̳LLG? Yi^F!L%B=N0' M90ҩyԹ`P炀^%&X)YAS HP:Rc$F9bm.ѵ+}2 EEB8E[JC3*nHhgBa")L )IjD ௪ڠ)A?/qnCN 7reuj>'um_oaAצ^jlb;K]IӲuTKcVvi\ɪRmR^[mZi>$S%|wYFUޝ-opt`4!&~Gdb`p%BMhcf _סO2K׬hotŒFR}ֻ|Vת硼ͽA+lHC4;?~wWmkA8vALi6ZՅK D"!h5k<\ThrΣW^tK- SZ(ʆ SwQgHuFMenf h* G jձ$Ѵ>9/w$BrJz(*mlqr`'/9 nMݎ>g^0QODABV0 pI1@ |~wv|yn>o]Y)!d1Eثٛ_.4Y9>=R36_^)Y폇,\.@N-LRٜ˜͹ٜ˜͹fsfOmxT;0H (: a|ІMXD{ǒ䏋Yw޶ ]>&r!(Xmk&ce҃֋o}lKTs3NԌJ=ފf$h!{kdA,{Oez+8 ZhEy31׹]`Ky#niy 0њQqʘ9cXica49VjvyYɾPTʪeKی7W f<{VK;Pb.F&YOA3=G({9 vO#IpF( VEe "T; A-VywVyG<ɂ288?w&9%<-G3ˈ,u;e K35C Φ;Nڰy1!~l:}5M!RÉ!3|!U}>i#(ouWturvYN?N |<{K0QY!帙cW&wrPh2khuH̩% ^>5#E3w0o\Z=u$!9A|?`5;@`NAzJJ^m})d7LN;G;9QHFaeM >фo_ ֠]bk5̨ ȳBiHBH]L/rf4AAz^hByRY\a" 7&i7] 8-=˳#7CT. D| xJx#QW TsLJF:tqZ#{ȯݏYiLSd'm'!J}NOl H7F1W(;ydqϝS'fܺq{?8 >OTs<-Χ sE/.,(}w. sU(q9Ym !B+0y)<|q\R{}*bC˫tBߒпT/IJ\/v`?{ƭ쿊?’~.r nZy kTTIv^w]Y^lj_ v)op8# Nvcu]vDA_ ۃaЄ1T^~NUy(FǑMIxdP$EF;XO'0dFxB #_S҃OI/2C>]KK-K1Z=A7zJX|\S2q23=st⹏A} v1͗o$#u=X>V\&/Ej#n>흮=G1yR"?Ymԃ8L`7x(EVboha-&f&$Ʉ/$YE[:L㤣ΰĉpF7^\,r6}؍ F(Cp`]LV!8%pJNPp8ФOmNTbEւQŸQj]#!""CSײFas'5!{β ~- cY"wl ^ё2J2JfCQ-N,d`c,w'QE Hw8I ] P2`A'aa&mK-e}*Wa%o$<-:l!F:rֹXmb۰$p՚Ԗ`@ vDWw6i\\ӆwʇ5p@#kageHyσaB xbKc97Y |b5kG2ZN8%Q1/Cƨ;I_`XHm$QpARSk{A0z jPbN;"QگwBSŝNtLbv@x=zPlXh.R& ǰwxp_l3qɃJJk"̦qV/E֝<:y;{<0>nWY3cjdd|<3cX2\|ۤaL gɂ^?I?A?_g'v"Vexuォ ǽMAwf^̿z3}a>8ɻp4>( p<>H; nTֿ|}54Ōgԧbnd1cA^j+I! $, C^ )B63wߪ̲*Z|[R~\¯2??< x2wX6!.`i/hD^؛қ:PssLh 6"7};rnn.!$3p!rO , UGn 43ߥ}b.͆w HXI@vc cͅa xmсChϚ Qh6xQF,; kQܷA`+Pa`tLD'(A Ѥr>*@kG#IևI)N%_ r@O%lY~Z@yaӬG\Oݙ,dv> ?krY}%% XI4a$Fr8]1wnHϫS@2{0NwWU4cxloL^ul*K0oY:`Mr$Lsggñ3Y|zor2[|={6fe_O/gѕy px$<@ .*X6H@_ R W{wcxdA<h<דo. 
.1 W={볷//&b26z ռLw~Mig y+P‡4f{/sϫWu /=sod`j_ׇ2_ǣOČ,m,}P3.]/M`TK' λU.-m6ԬMY6|q Cf[ёw/,c9nvџ~H N {KbvF[Ľ4[EI Sw[淽tʊ; )(_z`,S{ ESX@ zxSu3/o/&.ooi>7h+#޺LQ^z9 O SƓlk—'fj,ZJ)u/^?}or6m g/㫏ԧ-pvd2(g4_L>&VOy6(lҤ2\Mo$I3ɇy"xqSh=LSdz,f99#-yOF?]eשwc;.SG٭ξO-P?[E$=XK_dydt-׽ÙJD4bøJ:Z9*"ՀTyѭj%D!%^iT&7K͘"$$8hsv2{F,doNS 0zѻ wۡιx%Zk}ٻOS*l{g,i vQ/3J8lYCګ-AWEb;PA Pr( E6iXD2A$D' L"+"eoh&+0Rcz.'nEfD`͝M` DVDY9%n dsniDKesv S xhM7\oq)QNb BNiZÔӊX;0gDRj[!bm4vRiY("K?%)2l:ou$~wIԁ ߽{ט?=9ys \\ \y;<4h2:{qG.%' לyE' }rXA#]i{_HKĘi0[Rԑ5(o=;s{U*󫅛eO~=u2hGe Q~sPx4tO^..$NN~~rZ!t,bݭn2b j9 b֤U&*a831LD`fܧ0n0V8!#TsLmL[tx:{q#CIe7@NU ڈ`}^ 6"-jw`n3$%eY[dxOig,f[[~$}GpRcȵa"()Q<nQ)< \̓Vynh<Đ\7P'!'Ka]NM)N-IhKt!+Q=ssKatB=ڄDŽU^譯 v)&cbE mGc7uv8i 9x7k,yטRbv/lypc+=h3不>ߧF -*?ڍx~\D#EjjCA#D돭bfcX"8ѼUc*iS[AvHsAn X.QM-xlxlY $*Z aMZ4LKa)b!4Rrj늿?a|N"Og3nɐ>rdXmkA* N/@vmF3\ʣŽȺSgaf]PuLAt0[ 1(c3G(Hk#}yCH+x6V;W]SAy!B֭izsiմ\ڣA٣S@WX0C-lebD8rS7O7nCvW+YvW(gd3%ּHKEs\*>c᎝ <ŐsX-Ɩ1K 6"e5g|Qw`ɎS$N.XW׈f"ת&C=W~;({s:UP-qVHki5n: u^Kfb3VsgG7IsIX2 ,h}pfWHu\8oO.HZD]BLl#l:MbH^آ"ufrbRHSHnpO$̳֍hgH]g$S-JO66A~e?{`C1/CjDU͏A8f]b"OyT}H…lwEK}t0)%U}tz͗j(Ki u>,q˭EZ p(! QjIڪrp ˂Y]l NƘ8r9͜ET9Q^(L۩DapMURCq&bZ7.svqA}v5 xaH%: pcgMleĈ&&BGi!VFhɸ]©X@N}Ϭ93,yxtzS[v{ n8nRr -M&sFud.ԇ`pb>—%g0T'Uy+(8ǹ؆z7q~6'ɏ*๭P̢sELqD1cĄrqbK‰B%6مT7k`TpD=,HPAQz57Uy`}]q ߧ8rǝ_@:IS"Q}($0X`_Z*d$J%eV! SUb-zkA&x˜"bDƠ 'Ꮙe0n_@l{` ˓C":f*73˧Nd陫ŤϽNH$⬓B-B`fރ VOlQа +9 :\#Hh}#"A,2q40dFx‹S,9Q^0h;6J՝σE0"אgQF#]Cm42DAoH%Qd-LՈ*ƍV[ !XrM%bpd 4a"WbrFT,t=ղy~rԮp 5}dsJQxi3LMYJC|!1QUԇi>>D4'(OkN?`>ӂSbH*15Zc`P`(ldૌJP(oۊ[(-QT5>DKLVxw`阘EGXS,‚1V0zR1ЎZ2PۂQEkEA)%2 Ąu]޵m,"C98EѴE,6Pr7gIJnR\]"3ά]אzx볘"~Y6{f܋"Om&"M0!$ dc;DP 51T\&(14QD,R6lW}ۀL0.>ՋW%n^BöLJ$7_O#qѼ4.Volvh;+Aa"RNBa 58`4& 9JbۦXvyD `c MD†T{,KrϑCPSsu#D2& i@֜y,ΨZ3Cg|&+cM s 8]gdl>\tpOTr_8]PI왥LOP{ҌZp^ 'DV;|/M*9֓öh_S^12ӯM l(-( EQn6jḐ663;T)ڵH(mP1;zQQB#X[#Z^[E7̷E4e4f"% ͇BHx˂&ȧMp(IClEqPHu#qJ>)~~vWlDYI_#*Õ1;QhRYhnuأTvbQlH%cѳGπ f76͟Ꮧ.[H+v3IhٕC)f["1j: EvK$?ݒ)gRȤ uHUUƓʐ)*֙ `?q2h%YmӨBڄRɆ)ZJ$$&/M?ɝ ]xtTD+3H$I; )o>qCmq-nJJmڛ鹀4h{H}^FZQ=? 
39 a;^`lOkpI)mE=SC|TN/)ܲT8i@M0802A>%b^L=hL/όIc3} A6зXPzБ=_Wb!xy V]E-oL'<`M'/̿xZMio޻#` bٹ47PT3O\RϨ> +]iFv~L){*{W-0·f}f.W!ol53e␴YBQIai;"@8 _R䲰M0 t(`18Z( 8eHL0\E Ê,4qP!0a 82$Y:&̀ӶF9'°8]pdD'LKq3-&"}!8FL$CQMp@2;;ZȼqU(aW!Ϻ&3u;he '׌rw'G쨛G嫏fSṿMBf->]Z{f->]Z¼^o֐y0wi9L#}rw$<J9W!bġ݄@f 8O2 @ER,v& h R q(J"`D3a8N`r`:0cPZ1_thZ9_F Vta8`]7y70Yvr ""gefWF䦠ZNn ?ol$Li>q+Os:*sYLE[[B*iU( `ꆱ*nt#qnbG(L{G @n k/7kx#E+iOrؕ`'9d%@5jMr p FmbkվqAE,"I8ۆRc,?TO+-|Ts() zN*HBb(g %`S::jx%!9T:%;SV9J(BYDLP-I*IĎjVڳk8svc1Wñ֡;+qRY3D- !B)102'cC`0[#w8SŒ[?k XT+t!tE<7l he=/!Q+9ԡA'ӌ(?à`{ލ?w !/u; $^#Se1z{Yw2V'hǽ} B$i3iX;p: #fMXɱb]B_zeA5F8ՓOlۡ ÿ "I80F U[jYHv$QAAHr>@ˁ8ʁt@2|Ȇj3%u[J 6ZD}C |swLal(cE'j~kh34Xl0LBhvN]Գb)} Wt\_k?!G]ٺ=nרum+K0hӕN@l`ǃ܀d9[N2Lo[\7e v|븃L81|L$P Wy=f֐_8Rt?b7Xr>(Km (ݤ#+{Z0z 𼑧5x6Pc0Z ี̪ hfr朒#+ Kz=:i)dmH'jYE1^ܥӀl 1jCHr p)~R,QyTY4}1EYSaHYPc39FIu@:2& F&<6!80w M%K'/F:l}J]撙Ғi=KWgY4_E޵uDjtY.N%Y2g鄢VYs9ՒGU^YC9 ՂЍcB |>j9*"NB2fT%Ol4J jݸЕB5 楚}iFm \`=eECLmV :FBJy^eA #Ʀ]*I9Iv3̅/_lgjh-Ӽ@qZ(xiFǽqRoXcZpC\-P)B(/ϛ(/HW^RԄ>|hNR&qJ+gbJjJWtMeGlgzΓ좃OBӺE'nNmN?q `5C]K^h"vQe\0J(B1uiH(ɃFiYq[y*6d߷%eS".'/b'k:}7@_J}xKsszxG;j9(濾wA~D,͎_|v~Cp+6 `>"IG^C.nX _{xkCS } hx%dgzE.^^r; >6FlB{[eFw?%T34KU:;+.|9 AvQ z/1(鷇π?~׋goY _,ב[9]q OHTjGjKxH|Ȯ˶q4;T3>YA:Eة~l?{W֍/Y)d|((vwZL3&K:qVv{(teu>/ܖ2{$Çr>FϣtBawDpLsnD=!moUvcB1 \W54?j8ۚ1VxkwdЀlFԣ_]mԽw4CFiYSR5{5ԮTYm񅂣 51*.ohO| 9fEIQZ2KO^o0뫷oݎM>W+p}7w6_6.wE)le*u4ctmD;>]bvc뇪ˮt%4I M|'׸2*@#7063tEB6ѓ^V3W=_]5#S9f iTwUܕ_U['tO|jIh*^ e=1wp7 ")N**ԢWϣWkȉ6ɸE*i VicJpp]-o[o2_>4g _n(!nx:yFl5T-}t:JeYom$;YP;G[m\^ }%8^tadQx ԓ=xH="WI1M)Vu#U$6Z\6߰N»hg DD_/\GI?sokG$'_sc qFKKkwG6ȶm %G% 11 :ڎ ^Mkb#C7w3[m?pJ [@or;B ߮"q1"rwc ?_"{x荘xDŇwo>$JyX~H:K TN!6zg??ˋr6_ܭ~EP J`P"LJ9.Y9+o QSb Hi\^ܮOXVX]y !ZwSS"qys?}^B%WRjs|הJU߫en~šSLu7&Nn*)AM 0ND'ezWȝuٸrڣOifRޕ%aNBTpleI5/Jcw4L >eLG95[F%uDZ#Q򊔖km)J8)"(pXhkAdb`pQdv砻W_f~Q (y-F~p8~;]s||l.Gb̅_'N'Đn'\FsUl4WF.m ?LՆoNUObrLMV9/bVY){ _o7[&TkS/C8V݁f$Q)csȎpr\h\ׁ;_};v2EQ}𞿉i[x7W8wk橥ƆڠJV^V;f뻳QЪIɺj &'fK⥧DP\*K-a,Pq%%-㠂4CJC NABKRR'8@ g" ?\$e=, tK1S%`J`|b_Ueg_}t$.{teiP0e%_z;/~RVn@ʎ_ ]I[m/{Y.FRzs6n$i2ЀS sj2X c_矖Y"Ref؛W_ͬx~t-W.ocqkK?0b#LѾR i_ߥV)s"3cDZ*ҏ-7rHB1#`,I,Qƈ7zq Iy;F #a!OBxs_o|RƤj=1\?FqT5tF4-5aRҗc`bdJfG2g+ '.?X!C*kc9ѬB9{2۽ތ=ʹBDݪnTּ3q?.h쭽;Y}ӝV!xW c [[oJl!ÅDk:U(lUmFE?Ӗ[;뢂 PÑ78-{ӈ=!sjd;[ ZI46CFP:ʠhX> HʸAz `e/ nel&av)7Μ2v/|MaAs!BsJ/IFu*xjZn%gͰ3>sXMrvњ1MPY {穚>]%ޑ]zN4Uݨ:hk*#2X ݆8n(71S pL^Xg 4Fa0*wE:۲Y^ƍQ*qwXq]q(ţ0_.rKn o.]<Ƙ񛋿}' l Dc2C~v5?E?4}Xױpvq7}$lP?ZvcpY}}~ܬӿ17Mn1[pCUY5T\+WR-3oTEnbcݚbt#gt.S@-ָ[-̛mHLdkvB)L䀝4N:ɾ- nHEDbW(Q#I 1+Zo*t5H%";NmL N:hobZohw*hvH͸=])l|ݎU! (REʹ.FA^1hО4%^HՀ]zxz~@\1sGw(2`hZ7#Ҵnaé"jBWODS:Ѵm_l7h%Trǚ$ }iYQXnV2yvl)+fLCP2 J.Jpp҃VfX)Pڴ,$vi9! eT^IP\S%N;!%xh* t%aBKE%-$:d'$\+LĒd0ʞ~4Y! 0CA|oDƑItsHb{#(?wfPUTGU*) Cz2;08 DZUQ*gh HbbB%6P<|<>>Ra$tp%- W7\`-p+*mZp|suuTtJx㱽dr5-YneVf׌=@ 0/wRZ2tD1M| S䠺KyP9k@(a*4{ᗠ_U[+i:D뻱h3(ݣZWWJJ%Cdd$EYmaB$iud!{dO:%D=Q_߇}TX Ū\ۋǻw.}hm]1-Δv$tB`p)Αr%e8}_~>֠}l0yjSϟ%}8 rDgRy!G}+so/r9K. b itoHwySQ0=w׽3wu1bkm,\RHF)C7Lo{QZn;jNwcƙpL¸^'zd#\ +XUp= 6㖦ldZs{R!gviozJꙍ4UɌ I!Nfu{,\ |?  yž2&sZHjp),fP譙k$jlʧk6}i? 
V-FmL6]6\z.6?/N` /HF8Ҟ: 6ymmi蓀ٸvSd5'RJ^/~Oyj^%݈y¦z\ίU1/&RSJBY$6꽹lVHCf=.iIp{j'|bt)@ϒ^).S-GA4'Z=R3AORΤHb$@JGk\( ],V_Fh *x_u&iDY^~lj F0AwwLPE~OP`͛4&jpg *f"jm0)gEŌP=L M7u !SɐHno)Ktw>T@_>Α4n͹MR@AHXDDa-5K+u~ViYraq,)=)1HyZt_)"P3]k(Wm@Rv:n~=6"g 5kuZClAY𠢶ҧXcju1vJ\ecer/<),jgZzc9K](C\!J:- r[1oB^( yɀ`qXRK ZQrP⛁i#@<|6wk-PFm;zH,eGt KV-62ku :[7Qa8E,eQG(`;+Ka$eVBdH2"eKÅ!u MN!ɹl [>A%!V舦v@[X}1uoI5!%݌#""N*kV)YP]pJO-W:98盇 RߵU^q~D=>]bьG0>]^#/g:*߇褃)| oho nk"z/(sN9 ju&zJQ2O407k@j'ԩc~ IV@15CLk&0_ί KJ xFѶ4”ggg2WS^?.HCMeŁh[DNж0(#CgY-g=avZ(ڬ\s2‚6^J f_30B: $]HHR FYp.ܰ(.zJAg`{4 b!)c}y45$oF[ƐI,SZšStςE"m^84hLw%[4 r㍍G%JA5p7W@,t V7):y[Q> gY}|]`#6*Z ׄ;;h;92cRGYxsRBk.aP)rSj Sj`R \M݆dI0/U,o˃1MZЯ7wh^vj%3"@glvGnvS_yӺrVxIT%1gjnd[WZY%ڣ7UQn)n[/CFu3!Ṱ -ws*e6>6n]=ɖ?J5T/.oo/EI?.3,&HC]J)I%?..'W|L$n IkC(ү׀XA :5 1)|y MBEu>Pu[Tz+{k;6"H́8`6ͻഁVH Zb$#m0=Q1lz,1d Qb[;, zrC_ ֽ LJ\/eWټy*7kpn%]9hQNPiM̔2ñ28$C:?{PyH COsQ+^(H^ D%Q Xn+ZFmZ:Ԕ́tF~ HdFTVŮw{ /;|oXeVK G/RT:MR1q<Պ! >q'm7K;[#Ks?=} {*%ݛo| gǹ}K@=8u7<\jbTw-G3+F N|f8Xj/ bP72̳,N#EsJtk@㩖j}mO1B٬q4BpM`&J6P=}8?JZF6Ox<޴[xMv!!'.A2Bb%:k4n}Oi%]s|C"HȉhL)2v#9b#:k4nHD*znʞ-gv'%\O1)Fq'8B8; .ݲ rʬ2n 3La dJcӑ4NDP}tWXZSjfөj:UoPA)ӱf 4lZh` =RXP۩8'rDx=LB:8R#_ڑ zLeOP&4_WFA̾p&~9$KU\R4CAv=/NeB_ 6 k&hNs0o,[,! 6"F]Ʉ&K،P$BoD[ғ{GeXtzH7N"vOsswߣ 5FO%Ӳo e^g'YxNPCJ+Y&zsk2avptY9ժaC<>^d/7~^k=B*%2!s=v=ԲvG#8!hu|O iЀ~=P&l"͙4H(\Ldt5DfrMjjsrY>;LBol'"- o&{X?bx]x!G*'{RO~zׇ}'Ȉ5)ܟY1=j曌uGw5`|\܊0+Ou3+ ipssa")>W)80ĸ6Ny3*U>R\KЈ3BY9S:tu7Mu2GH! >6AK3-xΦHW1g粘cZpV P -nQ,Y|zZ?7X>Ru|32d. .ߛ_)>%ϑW{&vLT~7"bMut/D W yv5igфm uB[$Ԭ܏a%1 ]l= Ҿkn o[&I$VJ9dkX(2nR.ϖ2o~K2t^zQ-a,AvIhEί1Hn.~]\V~VodOw/Wp/?}Y#q?ˍs2!,@3g狵t{뜆3S,Stn]U;k~|s/ G{#&тo>_2`(A =[q{?}uEr[ls'RWiFl@GEέSՐffL"5BQ&ܛ:A͎Jh5o" iR[ǛOD7MONYؙ[M.?-^ >.unrsRzrH [۾͈0 "Q 0Du8HwτX,Zxb-A*^7R1X_U7i%gw p9!(՘Y]FT#獠·=Tki'T"ZCR`gu=6ռV Xgؙsj#IҾ(j|=IJiPA Yr8#[Krޝ[GaD:ĺzu <6Koh- Gƭ߮[Oέp _J߻Wӽsq?zU5hie{g158*m;d\!oI(Ŀ(`GŚP|sbq^ZvjΛ۳JooKSPKK5! u ;DUoF ۧ<':'Zh"梤В7X1AĞx *.@{EC4ob28'(jUI *yL>^=>Uө2pߕ{n۹q6O̩UCMx{yjZk5V/|x~6D-fMT|RPve!ԕa?V•/>9ua媛D$ 9q )>"[(>F6\0*![xv!!'.A2Uq~zA8cnNMp^%OM{jr"$S\jpWx!Oxk4:wxZd{2 !'.eJ>' f<Dk 7$Zw7 g WÔ*h']/zְ :"]M=0C/ /toSFN'EzJDES10ۯ? kPgѪ)Ogq.Qj͆^d/72u^3Fi"@A5!@j1Rv)vQp]!xnFٛjF RMys*0"q&άCX9<}+IWH=E/,3SgEB DYpѡ-kzƹ~lس>-GbAI}C{8'33?Ql4W(U5F0.Jޱ qA|`R3hv ~4Q35vk}xBuܞze_9'Hk>J 5NNud/߻`"wkwiKx'i+Á;EZꡠh& v߶d:/u%n 7t)N: ә`iE 9cE*S&")̤m+Ws'iD*BNR<)wLEB4Vٴ\%5kTyB|V9\K{Pѕ gͥ;C~^;%ˣS 2Qd sCwL娫K<5 PR"49i #>'Ě/!'9R#tkN.r,dfD_9 8!ʑ;HAۓ.% v<#c&J1ob?*1*Ȋ$9&toc,8$><|/:B^=zVJC)qGoBNRM5kr>y̳cN%5w]YKT褎5|)iT$^rEGA lsTCo@Lwu7Y&|o]ɌY)VJwCqItn7qǩIF\))jF RBPN~8n5<ߟfrSIȱTil Kϊzu9O7_j/_%Rwh(|dz3A- MNr.Pgʚ_ae1C >tn;̋'Y(RT,(!-EGbė@" c(k7U3xqӏ-gtxXqm{@>݅&BBHiz*2Nz,He'f!DQyLDd ׹tEЀ>[](tר[i$u8\ ?_ 8d?ZVncۘj嶘je݂'#y2iq\iA uCϼ|ȑF\f M [s1?44l0W6?Sҥ%W^-#ͽmL]0q.f£?{{~Zxwq.#Hw}D JVGp>AD"l!>X|x ;jҰbO_ SD)Jfd4{U=) O8a)hmd-QF r=> .H |{ےF30HkH`te`%>%KJY|2fVx;COP Z#1Ay۹g~&L: /XuFw z-mOEՠa6 cXThT,lntslq Haj8s%sE#r϶B^$64Y,,}]RJIּ%fvq~&_aǀ5UG.pzPFVjSUSJ__;GGx@9Diw)T(((+XҨhH6»vN!.%:Z'MyCKՖX@:iL6X(~aIH,3\(@bs,qf0S@Si%C;LF F*Su )?b8s _tט'Fj!03|tnv]]b1Ƃ1khH&u%84pӈ.eY(=ǹAH#5vT+Rn' 3|([) EKqݓk"g*`}Z ˍn76>?[l9"M@ii]hi3cp06HOgh8Eg AFBԇT!Euގ\ JHN c+/j}XoFM'!s~m-+|@TÅ8`3G #T~ֺc׵ X-+8!Z\y3S3WfO?&W5 ,n ~_Ծ\1hEQ$`1z3h]0>_OfDWX-\}ݻ(  ShYBMP!AN-#&+pbG@LA󠥆uFKKk3C ! BNp̜DL(U`y :$a1eΒ:aŊ*7=+ f{=Ge_$Mhޫx,Jv r(`^m'J4u$Bƿ,yn=U0|n,fy0 zF~N"p!f-җ鷌T!9]xbVl`J頊j 84>g+,NHNAǻ0c2C1,ΜͯPJ QAXیc!cy*Q-wgI,qJF6rYݭ1݌h%pܵ>'fdmWc> E#CD HbD v6L,x *WHSfuTPyG}AXB:Y4:-ݫT>T߫+HJOP ߵx\ArV%q?:l-kT5)q񙟰V0h)HpfWfAawLbX\RLu4T#9g5U$pDn@NA,,<0L "q6F:` J7i[BWe \-S9`te AlWlش6~B|kJ*sx|D@+-_[\| pGt90 DVGxKjp9ە3(gk~z^V$NWY"q|4O '֟?|/y:5ݗ! 
.]afi}܇QΌ-H:/ƛFF .a:#Y Xxqhjx]54+I9jr54Usxl%b6ualS<,lbƃ'Wdu^@QF1ҚPva`P^2cǫ!=d!:52)F6|l$+vq`˚bv߾,s`B΋_%>.uYzH`Tpۙ J[? >Cαm'TSثaΞ{ v6S%,sTvM&J7]+u<:Exu?ɳvxv ۘ+Z^̤R1,nWjc3/S#Qj^[Ȩy,׬&t׊I}+CmxD~ >>>vیQG;Ȯ:N/<))>3^?R {f}&i"5?c i5lF1ªUk:Aj`AkL_7* l õh}Ӧ"<>Ds9@KkPU2)7ջ"LܳBZZU<\ a`A,s| pʌsdi&wqH_Yރb~hwע^iaP$:^(iE^4|f4̰Jx*$’Pi̅& z#4ǜz)YUZb<Z+L`iGe9/ |YT+{Yv0H OR J0_\1.Z*WqV>.4 Q*KxY,K]t)x&g:-(~3r&δL|j.;/oh`7t7j.| 7KIN]Mw{)cJ%,%7-u)C.C5lEH@Bcul>iLi34l$9|O.$z} /F_-4Ji 5wkT{LK[}.Qw.nB-!Ph 8gWbU6&î=3`F]G%^a,Ax+4 Aq+*V){ B)%S̗S8!"Ph+v AӮGe#ځK DN9G(β(GyoR=(4-H))fD!K!GB,XiÔ!Y88E E0`9{v֖Xyy%SWnBJdڎvzfkKr$<7#\lZ@ z'^)kX5:Gw~M%hBhq7{I R n$"HAEtya=4N[Lh7W⷏-1]ȥ`j} &+ܓw{ũ k1zW=IB1Vn[jQ:^ҖFROK[}AvڨP0@ۡ8} ==(8q{>& ȢQK _ϙjb@*>x+%gܹ݃4ekS\ 0' '3捬QScPHbm+ׂ(hk_ xiaJ;]jd\MJf.1VlپA@6^s%>Wpv ؟{46ITߠu#TrH\]9نj> !v!Ɔ!l꛺K ?[Idž?eO'wYvˁ_"]^_ֿr+*KԳś󽟁9ouekE>ӟw~ mm=kS s{TNBpͲ)MV٩EݥVJ4~7ovrٛ+smޯ3wF[L^}<9 sXW_&36/A9q>U9yS*@.H.ej{qiW&`N*:'B;fM/Q-Y'A%n[ʼn^]"ݻ,LJ \|0+Lv/Ә$)]aeH𳡷 I,͉C&TH41囑e]e;lsv_EʒnxƔ! ..n*Ioo9eL%ӫ)û!)z2]@r՛zi78.N-c2BkHUF}>us,m}ҟ~4ˋtӐTtW#'?9;K~ >͹}X36i=T3SP_py q&TZj9a# C^cx\˸.TcD*fKYgo.w29G}p =Qv0=Wq4N, S(P(-W8A9+ [ai'&g8  ]hV_bW'LBQ&Fy.ETh޳hYxݿx m]`Cgr1De4 8!M. z 8qkc}6O+[e^~J9Fxqcob{2-4 8e$vH$4QGL3}soA5z\BW OOAdi?9BS4!y"{$Ñu'0x ,G#Ge/ӧ$1a)9f`\ /n[_M$OCL PG Rn,գ+p}[hT"vg:pSP6*qϼIzĹNLFZG^[J[thzq}׼Ha}G<k!ap)+3[@1cbHC<3Xʓ~u3' ?!쇆!0Ie%1gT*t)мWi,3j5 |Qv3r]j=bBnHKrz;L=BB}AQ$w@ԍQi{b2i!`ɞizs~fP,55+.I tJi ͈bcp!w"cf`E- Ư1J;DZ,%dTS5ъ 5&[%3K7YTψE !LQ(tIJ#ᆩ†F  i,gƟ`)L0v>Z-<6Q! Q`cQ %"_:,LX.˵ `?!~A枇p{sܘ] x00 TiK+ %1ZҰ 4T/,tûP04Mr[z?.Xַz}qo9 LQ ZĹ y bxn//@NS\f߹47ݨ7|{} #4C#q_1֘"e||-7e +3]8 TZ*RJ%'R,&;J. !/S"MNDUw)R28nAʷ|>VV`6dAbLnk8)32NPCJ7o6oj*`0߿I(ETi|owV({9_K89|oԙE`7uB4$g'Dٿor~KN6A}S6? xfBV"i/_.lFuK u}!1yú^>.UuG, 7,rc_ѻbb:hݎNͻe4ջa!_fؔ"\ Ϙ T[[0ʋ1&yEcLqgx6L q^8!9.Ғ3̞Ls:tܡCM<|e㫅PӨrNuscLx^"dpg$*Ĩ9YRЈaeZ[Du+1AB\s[T  C%"+_UZkC+ XlTT+/+|r#ȗVY t.aА(2LL=_ڲ-uS"%ۉdӖSbX/TT&iFnsS/*kA g@iB"WňP.oG&Kܟ{2_d嗹7Us?w%M@uR;_9ʩBxUZ;f\154p1^M4"<# Ш7u7x#V 2Dbn; GW=}jj)鸭>KjqSPudE.hɕBfKM,$(!_9|0ܔ_rkXaJ\y"qg3ܳ$Jˮ+.S zGjSpWond/X1_Z˛Һ[BtZI~RKbj c@*Vdz(]Av(W`rrA5n aW?3%I߬߫d0:4ƚuӗPjNG($͎c6ksSu_}KY7},:؞{ch?rʹӵbt{ӛSDݜ&m&#K{^c&K.\(|%}[y΅c)4w^ ln`ypxZ_2HLpSAƹÅוDžxPpUm*2Bp**9ɜNڨC7^UDStV($T l+A{ѭj קΞ?R!c',垊AL9r?-1Y)M*{ EtLIBfo;QOs~g^j{J Ce{Jzn RZSoP\8nI Ř -Zӭc3(h<1chG0Ɂťbmwq-:<NΨuLgX6hǡ L%A|]2' 4ST~>Fa s়S1 BF g qt$eZ1*[x*Z]nUcBrSh1`Ɓ|2m%%7,-{X̓bUA7>Sܔ*:^r!ؼԈ5/{3K#ks{qXRKG(2y/z⑖s P&^߆ b嫵%įմz%!HjތweVNG1RcX粑|r .UwQ$sƮCLF'ji!n[ %i|X%*veȊHfUj۠Ao A)(qje׎b)C=`=3Ljs;x%UH|C]9'f־8(x9cR)  uB1l4=R縨`hXK FBZRo eQ[,ʻuSG׌y yKP{_S ēD敪RLYxJ[55b`^gՇ@֢U(͈3B +B)𐤜9WJ0j+-DDx]e^b5тZ\'!)u6*'k4J95WHXs.<.\JMrhh2**f*C( Nynij8 0+j3PBF򔄬0Й u$}!ЈdHJsNgpD$p4LIPMK_ոJQS_))0b3*5/Ts= fRM1>>%F%qk|}!.3n_fO"N6nΪmJtG$WPzd<"A qQ(NitƃhOJ =zF %FRA (PiHaSH3C=~TZ 0do !]rd# *E/d!50R:r)zj=(ߩ}A\O-  Btu]gOplݟ5s4${/KxpP 3){~ tlYc3_F;NQ"wO\EKG=֜<)C$PEz8sBcB,퉂QPM4&&\l!S:t=ybO'XSٛ?RshܑALW9484S#hdhLN (&_* # y?jl3k`$ ̸ĶS|#/y;C4b Tk9fhOŤmR}O?-,hFe#ͻ17$kja7͒*I10Y :}K s$'l2/H7RӍ nhӍ ʓ/++Ӭ- ʴr=\D [۠tMnn[!\H[r[xP(Ƹ !K?FS!S +2T;3 vOl q< vSe⬉h5cσ[6\'vOLέ'$B|=3FW2hNQ~> m'8HiA.;s8:;s l'cL\A ۾E23g5>Th*luLZb2p%u keͥjNE k[o˂VR&ˊj~2[f_!áüo(0eH|W"_n?߅qŀ+WMb—f{7FkB $'M-9 A$QW')ߏI.A鯅CDJn/]S(TJ5Z4ct>`DfYpPM)v^hFOh(?u0 Ҿ\ċ3ڸ a FCA1D1qbJ Ѡ&+l JZUhylZ,)vC^Ɉy =:e`KcHn]={j h.278+/Rәt03"9Bɹz1/UKM]aItMT[MXֵPRӥ44Θ'*ǎSNkO^3FĖӪE~Qz@LJ_h.~y)Q@ [7ډ,i]PcTGa-C#?2c4t꣒!ۺ\~E:U~|{a~ק{ҧ_/VHFd`z5A++ ׳I-XTIT&2Mȱ<:U#ǩ]٢JM6g'J-ǫ uՓk+r҃ S2A ôŖU,kPE!VZ$=mB8 z<ˣi9'&D8_/"=no& de˵-JӥAU=}D1cg&1b@ {-uHpلzGuk8:@x0 0= *h F z6!Hߛ-un1ߩ`##}Vj@ޫI@ ]?D׈^;z,%r>ecVhD+S^ FF8Mr^w6Q~YfdkDvmpuF 乤S\Ńl**A]+ȡ߹ӡچrN,OTsaX]Yoɵ+^.r'k_ a`{Mbc<3y@EbBQ 
&E5&Y^شdȲ%VWΩS2jP-'cE6t@Lpo4IcEfslWwXc}B2:AMypg3?_TLH|/rQfȻх??yt7w1DtX,/?9t6_x<\]x zd4NvO GDǧ7ŘsJ!v6l}zKJ;DIj;A#*%4R*)(x$Sk%o8(-?ήhEs.)}s D\_Rv%I^IM^`2cҀE0E ;IBIa<7VJh)O -Bs{](~9j#aQ,8؞ho : ]- Ө:T-~¿C*%!?g S.`Rp (-!J*a.=Ҧ#;#DQnηV7 L 94СDHn=|@ŚX }p {2Ǧpf;NQt} OU ᬹ4st8O5Bf&k&=e Kd ;GDrޖG׳"Unڀ¤yv"xʫ+h7eGW+\ OsUu[z?xbA3xH.ĕ(&kljE'rS.4E$tG촬HY% -2*X, ^3x)!8Āt 6hI+b#H&?X(yZL^J2Gx(3YpDLwf= Zx B%ofV |*+V)aF{Z$BK\igQ 8L-12qE&r- c4s:'f\ PdvtDSnS:-L6L(0Be 0Y0!Z0fu`\FjôR|5Dr9h\̂o%(u Žot/WfQ-P7]> Zh~ ~t[i̐l ~ ϲG\a#!쪣MhWxƌ!Xmav! 2wJJA iʇ|$pOzj z%D9-f&4NdYUskeVKfd7v5jkjW WYs}jS)-S)&>5ZqHf62iX΁ ì3d"\piHs0ո&Jr=L;$H0SAA5 dKS-AL;f@Ctښ:U*($+5W|\{X15@sJaM ڻK Z LVCtv/eq: HQeyNAXaǸRq+O; x\PѰ86g}9)Qp"r2Y* +,eQpʍyՙ* b [L(pH+DEq`/B.cKtK;S+jSS2⒋pWet"YxI mVQI-5Qx Ӟt*5U/7>\ \B!aI6iP Ik|xwH#RNNhLD$G/ժ5xǷWdJI.3j_4bi[C:1REP4Soɿ?BǐsYR6XCgooo?OKY=lx?^_XnfZ?2<}d-C<`#CZ׮ZXz+PYo*%M;v Yҫ Ŭm62u,'c;W4Q0*V@mmQe 396dsCq,aXPf !h\l7ܸ-I֮bD?Oxhq-VINv[L!'zKmػ-eR՞f_eD;lHG03 z:ˤ$Ljj"#A|+YtrWB ?O~EyB358'3`aeZ [RgPNх?YL{DМZ]fu݂jmgQUOOdݟGadſ}cX57햎ݠm=,<|Ż6{ R)3>koUM*ޫo^?<ڭ# ]OG4huO՚RDh~{ϧ0BsMS1QX6QorH (g0,/ =ԓN/Φw6v'  3 ?G+/6<{/o*#Ew,(z;w~1{s7yZ腾K:_J?? &h4&>>"W.g^Fׅ,D)sDVѩ*,涵[}bvkBBAi2Ey49%l5v8\s%.z 9vIr(ۛ`ڝ()uӚo8VCZPړh$xף%z F}&%=}V2ԙ#`Iz[c)NȟӒB,ow_=Ő\dWEXNFCU8J#;ʁLa1%SXJgytɴ4ϕ%1ZtRccVͰS FwOt *8"۴8vI!³Jaŏ~:m~+B, :@ r23OQ*ϰYЌvkM.@` ìX[j)ܫJIyCY+GDZ4)A;! Ҍ~]WM瑭UIۆ"tjJF ̇-SN5* 4 QL!Λ3?7C`|Qd߾mwi(qL"as3p߬[>OVL5\ I_EZq7~aK9&7ytfǤp ꗏŇBa| fp}uy>木 3/ŋyyQf"OxqX'6I'o_\p `0 MoaBc:R=x1,Chah0NtшZ%Zɢ=]:P~v>&YXL>0Yi;{ucb9O'fTעT)#[NmйwAZ qT)$CV] R`꾣J양vV}|bVELc_7A Et꾣v;x*Xgڭ~IDք({|/N y|wC4<4[Ǯm8q`}%J%Nw@ּ%g%f=ofY O"[oocY ZZEVZt38)FCls!TV '{ap_bE%a_}Wg)6 CG(Ak2 qэ}7()ר!)2RNqyq(,3.dH[01g*iX1윱&Hh&& DH"V+$:>6ff7yCcif ]Po̲ͣ !B$t1_vn3xApR{N,[֙-H06}6n"HZ ,H94!AAZcs . wLz$VBSndW;$ly7* 8=%j=7'!Qol)oe0ܖe <']Y`^֌ |wn wXp|g0H)ʿ_uy/Ppc4/TPBH0 {435W!pőưBeqIb&F/AiZ"X& 6d8Xe Zsm&Qpy"1!S U+t!1./5~OEBbbY ^zg/[BD>_,h1yBYś?yt7w1DtXO/?9t6_~< yqa5@uU|flfyS]"d#e!Ən.Tsbay6l}z0tJ8TDI猁ĉcvtJ\/X?aUO _LHl<~0|'Cae\/2xY\=vH60 Qm>T"`5R qOH15z.Ak9u_zL{Yϯ8f$Z VPf|~ҽ?S >lS+ӚoV٬--Tdhj$T7G}Ĕ*Ϯp>]X_͸Qt7;3ν՝>"-0D6N?'ŊI+k^>C+lOo|}xq#ԫpcP',+o9*rTUӛoepg&/a{+IꏾV6"G)j 0Oʮa+mܹ+cWܸfͦ3qYȶll5{Ӥxb5,3'iZ9 $QfFId&݌ _]|Eq%>z4: Gt!&ZFEb&XQLP0F|Fi@ ' +@Hh~Sy@@:RA%-Tx9=.Sq ?Qh(da\ʠbV":'$p$a( Aɜ.(CZgzȑ_ex a`{6ydQ#^ߠdu4vJ#8 R~JT#u4 >[~b}&=]+ qd%)4%C1\ !0Svkи:p4 C\4(ɶp&ϡb 3 dā q n/?aZ4K)X|7`'L`J(. v[ vwRwNC)kH"a f԰'73=;v`Zml֧ZaTiq-bUZgsOAi㉦!!\Dh{ |f=i:F֣(3x-rm"S{5{ڍ }_nARoۆcTjܟ <|/7hr ELOk7[S RDU[g$LFYxmk-rݑ)(SH,t .*^E"Z]?Xx8JoE} XҢ֚tI-*/W [p6lynjqSrj`ka/xXVw1I?s_,XY6b@}X!ِdfAoqEz[ fq;rvO6-f?բ"LaPtϴr(*9yADv >[TPgwW*m'U v(0uكm@/ZN\(3XIǚ#}'ŀEՁR{6xku9ą#c `BG,O6\h>ԖIE,Ҕu|F$9,!w M`.C^XW<}],K?^y F8m5=|zjAP)dG=%߼ &oA)DڗXON wjJK`^R¯]RM27|CqDhfTD^;(@`Y>oVN/0R 5>/Ar.yMh Ls[coǖ1 qTqv3MWgZ8X鼜"I*2ߛ@'>Bk-+r`*%Cq !s%GJN'./UC8>۪PI]݄/lvsϮ46˸>)2;H<8I1wΔwD;j@n { orm@^uаJr7̨>tȠ ў>e ˇ ɛL y6h%x/Xb gB10ՃЌbʝ{wv`b5EO*p5ŠuEEooG*pj%s  6G)ysdfҪy {-U:5GRMBy睳qDj`|Z>~ B\Z$f'No8TѭV/pV3`f9Qyw".))m-e +=EySY0؋.+iaAT])JJ B3++hha Y!R1zEe1G?BX 7Rb-8GX=u@&EqJ~-럅q%<7JX [PY }AZ90S L K0A%:p[-8f@Χ@SpT&N^6-sGB[T2,共HP)%њ:K,viS eDE'H♊VB#ymӥ4}D 1,)%pN"g<ŅXc%t Gâޖne$R(q" ihFd#E'J%)ChIHשsbZMsJ1)x^&)>: ֊\U~zm;kH.bQغN?^kJ0PilV,d+7bSf"{m찆URkYifs͙À{tքPB#$J h(,mAtSS ξYHtqJßd7 PiMX0k,ᕰW}yojP Ke͹ QXLO+3[ ZO$c? 
/[QxO{=0%GUIK`w#M.퓀>2Ơ!WmLKyCqwq594nb8͡Z] '=$#Sh\% Z}e睳q৘h^)"X>9;D fd@K̘9ƪnl@dT̛׭:QUhcDȖ˒# -;9Q,"XhqGUC2,_hsr9BrC,rPIZsͭ&QQb1sLN@*uyn.$ LhtKv8犣R؃S֚ҔQ Wf"$ͼ &q ;`yDF"1.8ɱČ)4C0o;S`\Ze6 ʢ$`8,5RdBq !KM"(o,MpÈ1DRΉf(;GbSQ!4W0@Q 2ޓ :e7@q0qHK5\9eN,-M4FnWdxZgQgzbjde˕+ nq黳 ?N\nn_?-|,{#nWhC+Gt?fqGatGM=0+cѿކӦ #,;`x$_m_jO~ 'o|Ah4A 6DEQ7)#=Si3 #OB6U@HérK2rEU("{*qXݱ.{Vn $)2auɻvs8GRLzD U ,#:4['C>l0+gW+/ʴ[oft41p7 \k̴eɱ,$3E|5ܚPM>d`+%t3K3 j^yQzYdt]>5Թt}CEPG5R~~R. }I;KH kq3 9K?+? g3 ~bY."a7]{|OUZKg4 N3~onz_vz;qTv0QWNNA\#;|2V!"-%P G3E>$y IyXnbԧASLHEE+9u˧;ga2rUH"S|˔ބ{["0R7KWPYY_JYSZ:']QPp~tHXzaD ,$\Eכj^hN%4ϪTc~o -o,S2IYbl) =C3(/@"Aa6D+[)g1P-ⰈPz>3+pӷse[<[i`eϡUp#;n(pc&yC0r2$~H騮?|z!]{d5.ZjVb]%WzYM1HIl:F,t ,>gbp y"Hš}vkA4vs:jz׶vk^=Q吐W.eKB:{]ݚ%Rw e 2BDQC=>EI4@R.fZ2}ȋBӪFѬ4:c@VMOdj$ƣ"-F縗ƥr獯I|"m <.`X5W7sd<{ HZL'giKH1wGn\0EmN||dlFQ)o?ECԢl$n 8?䩶 9QF6 pՐ3?8\ճ`rp,^k)m: ´%_vb04^ kĆVEo$rA:*Y`0ڊE^V90/[qh }w t˴>bRkm[kJ~A%R CLgJJ%,v r) KAj/7FP(ˢ#)^73~N~2kinFˌ7\lTD_fw9M[n)joHQŧP6/ݒXD%LD)!J*&%=}I"Y~VɴcSFs=,\}ȆƒqQ7?YJqCU{|dG\>ḊYZT[O._36rZUۜ8ŧۛ ?,n߼A7y[CN7 3{7YO'<(-r6g=LT. 7iy]f5|Sw)4O~ooh ;w!=D%4TjF mpwvnL\D܌r]| Jm?0ٗطw,gՒw[2$. ᐉ O*[y[yMoeՂ+#2A$sĐUD`8Ek-.rFց&_n﷓w9 N t}d0V:mxJ,]|ɠ'UҾ]p} ;S L7XlGWM=[UyfOzICP/DBF$j]UR6&c(ٚ {AF_yjD Z&]|5P-6pFw[v|V) 0fq S Np6 /@7{O`JvLX`F AFt9b x2RTŘ:B&,A$(G憨hG,f  #:m EXI{NXʴiE|̦6p_ P^8 2a[NSxkJBw c9D@ICQF:ոeg,u7mkNb9a1}a10PjNN A=r\r.%b)cи+w_[ڼ1E;3D mOE#m`4n鷟[);.f!f~mQ~1ImMƣ>/v˶+"G>o.OϗObE}FykՆDSy36j @S"  SiaWJ eF< >~|<~ AFR#%v6"bPBmBq ?v)%/~Yݙ e1uŠtvFW u8zv٭ 6%M.ލt<:GoA Ҫq̼Kwo<>Ï3l/L! [cv?߰;Oo6+bT#:Mi GD%)jKW렘9V1aj᳞g l\D\cf{B[k~Qac*yptH0$Z؇M=PPPP\ {v_f~wP"սP*_нDv$W. ε׀]oi,cʿ 92iRjT<2oJG j ~ذd+_gy+9m #m԰xo!Đ=z3RS|JU"a"X>RFNGh x~Ұ v?9Xjd|AʣE8V#]p9DxqhK0+8T^]fPsh%Hlוrk%-z hk3J]Dœ/b\0ZQEͧC^?^'prյɆx$_&M {4pspSG *T(\gODŽvK( y~v^'[zm}ٖ7mɫdr$#Q "uZ  QJ!]VH|Ė1"yvחo-6Zh'ë]pW=;O #^L%" l2hfr^0#c%F#/[b+Ș<`5Xi)7.#DZqF0DIG~EXu~`Ŵ&bpnzhIY]Q胴O%˥%a ; |J$CPӊtD+ӵ$a 4K;m) S8$Xi#8KT"GV:GK7LDc eRBUFWP8|㲌̯An!B^]Ec[ [XxN)ϗeO[(lR>!NZf Fgqɩ}]k 7E??zwC>]z7CofWlrYEE>@&\_u]W~aG1DFpZ ʈ8] #{6g䢽ы'/'~mx^/mQ7/=~,tԠۑ.G+1l+f'яѳ~dՂW" H5̅T p6}*5?7%~BRL|1) br{Bm%sljE6kMq)g( Kr7mŒQ듲z|fO2M &q;eYi˓T+Jd"ҨSO9q҃ě{1}>^쟦V.g,ptSWgkq _9?gv6|?Ǒ;6Ϣ/k86-Ʊ7u*BNօwZpKԖ{WW n 'tȗ(B'.B# ]Q) .wQ0X$QQhwNE u>8sݒ zJt:L:e\Ѐ6c&(*,fAV6\Zf48i8PJFO[Vȑ.iqoq=mqߵ!Xvv,=AND]\@oi-*rnWOOlx\|ȴS!2\&O7Ori9Ҟ}2AVZǜPq:T~ej~3us%_*Y8 ,wD0+׷&ƤO)xq-NORݫ+/I?i?QSVE&Y]YY~njT֦g)&6T$8e1QML倃`{ƥuR*uTk{`bU$\Q%y}d\Y#FԹ>቎ztyJ阈dH ߌCz_ dѝ4Z Rk~nJPrZQ~c7?+|;=p+C2?~$6V 6ئwPWg3^S[*-~$8' ׵gUvNj5YUC,4ǚ2h@s0G"/V wOA$u/i|~&p]%%+C?§}RƟyr Eڦ>WjeR* _ybe%TeA'. Ձwtt:䀎%h`)Oq4=|~1]R]Vc:XڔgNBpUKXqz QuXCiDke2MfԊ6|Ev\F9k' 7ByPۖŠg:ܬ7S;S3m 4P}iOQ%m"c2$T:9 hnu3F#A 3D0{vua "elu؈)~uVGW bK( ]cUR^nD!'ºRPA" xǫVRs5)kSƋa{TF-}hמ_rn0(;8,2Lv|1pA}0+|] _wr-)nuVpW'(L:4`WB/qpu6-|}\o2_:PHh#}b3?.BVST-hR%R.a_#Bn ;Q \BȡKIIX+NG'Dp\%(AKO2m4LNTRhЫ"en&(E: Z3眰h֥ %h0LjyIS+QFH{Cdr8Q`b68oQ0ESCS0yRhC|vʥHH9k 4DiԎN2K`$pw;9uyQ 2 -3hi//6)SS#Ј4?pH0tJj _+Ȧ`iO1li% @,d9Muk/@Wn^{*~W5gCߦi_h v!|ML=OKG>c:@=V"qÙG=m~ \`fD)?{V E%]]})! 
l}HF_x$Dg6j(^>fɢ-wZ1Y4=b[X,xŢ띻xTW{[TCOik>6,>w1Lv_ǻOw]kw qGɃRjo)U-;pBk8y6;6?ڤ^ڶWnǣGWB`Lo>өngPzգb7&/<&((d2H,Վ  O yEQAc.9fS%n}VpeKݻхh\}Փ8-剪z|;_T|rCv?5|l/ٌjŅ< /*<2e;p??$]Oy,Ww%_6bjm_JCօ}_g.da*e_NCšd_Yj~aWo7S (= )X"wT ֪_G-} Pd]N*/?-]LWUztdҷw ~ izwݹ56mNZ_/'w!XΔqVxO߲Aa,& [aD ]ޑExHpdP7SԸXKGgɏC6AKo:Gwmx?H$NB|ZbLRE p°ff9jrI񨥠 =ͺ؉K߭VޝU%-̊W.3JZyriuX-`Ƕ@!EH!bTN'B\,~΃LaaAaֹ]Ik`av8x)j0q,;S/X}%Xsgh [lu;|*wIzo8du?Е-ijx ܌{y\>㢷D?eT M4E2vWlXl]{fΛ9]%VTc }RH!H aѷ?-N֠,tvjq7 N 5ܿmX&Ȟ :{984Ozp@5bn;-CkNmo ynKO Mnm/FX]аKل]2 }ru?_}y =mbȕ;2U59xe2O˧We1ڂUY[<-kq0J68Q=R~NHaqǫuk70V7I_n۔wʻ}8^VJYϏ_BO'b->>VJ9oVW~brXʌz{dTt8ac48tu rBX REI%fOa{t_umG=|u9\j Pub#tE͔l,⸍PAxځrT稴hL_hӴ aS?9ۇE>,TOc{_>Oo>\ՃZ6g7.&~j{ӔY]բDU3ҾD6rף)I 0",ٺ{}PɁSOh%%E1.6=6pä69 ^rҀƘh}?R,,k<\jd0* шBp7| $YbI*Z]DT'E?KRk!ЁX+J1E+x@J.%`dEp'$ Zj-ސZ7bBz W<:|zfn'))ao nmcQܭ>[+v1_?Gt0JI=AokupKIPk[x y9Hh.Ji^u(H>)<ԓSRL~wta/tkZZؼx `QeX9kFb g+3 M*Ʒڅce f=-f,rY>]Hp6_a$L6Ot 2Io, qJt./ k7sgl1h2j#VQcYaPI'+:`Tz/JgXH=>҄sCz[ щpruNjaN##cH9\.{oS9Ӌ&n#JeԷ2+GG9ezt.FpbI)1 Dڪrȴ9׹zw|So nFE9F)2B &5aNDV6eYD'c?"`έ]9 (d`It2 )d4tCE#-3m %Jj`:z&J Trp\+8qQ)ASLEOU*qzG!tOwGϡU]J t t0U!Pl& B (l&[9RE PGSJ2*SyuTf[|dh^<6+>ЈcD!T4h7AI4#CJO1dkF.UrjmM,ԃ9!"ȜlJu/!Hz2)k=GX6iEv׺iMRW&(琨q7)qMV^t=cx0$UzEV 籛Ùi VNTi_Cky5k'wKu֙aOͬմVߢRww^58`}yT&y;uOoWe埾Xu:_ZBhԽ"dˣwnC&woyt`=cN<eus1uz#fyXlCrp.~rY-RCN~<.+b/W~9lUtyti$qþ;,- e-~/NeԱkg/:MaϪ!d,ųv 1#]ɎnEלs(.&{w(!Vږ:9[ǜy8̙˧W4ZmFV̥C0gv(ˊFZP|qO{7T򖏎G>ƌ98oQ]UJpC#3s);؆B0kIi(IO? $_y3ͫ&!,һw}:_3{8nJ_"s=; @6Yd0I$gݱ=Y)۴-IQb_Uźpzi@/%3?: 'zi5 S`T_TzdL(t}pw}p@G7x0ũJV"  ea$K QJB@"xjꇒF~H>xk)h=Ĕ%I̚ f&3J iJr74\<&-ѧހT5f u Ѭ,Jj̠F` &߄Je[  AEFL"8Qp0Ci JZiŌ2T40$^JCbkjA&sQ=>IMP{= ] 4A9H"rr*ܺ#nfeK%q.%HX!nqap)k> Kv㚧k4/4x̍0.!6 N>=8)!>tj0n ~\<&`);WR?:Gӱ0BPڂfECZqjw#!` hk/X _6PWɬO=l"V݌S.؁z%!@W ^%$ '7N5A\L_9f A~ٮ n?hyʁVն>-tzuu\-?/ԶznMĴˀ?yͫ}ݷjIL|@>ukMam%lӫ mԒ3l_~K$+$M>nDz#jT BD'1mLLr/#}/N{}/^ܞ߽ʺX>>ܣ-"$rəR}@t_BKN=$9v=* VƅT7[E=$q+AAU7X◫_}`JgJ )'Ji~_!(py!} bB:l:" 'fdqn34D"fF ># R(S"J%= 9/C9ʓee`y{柟"N;uZFRZ%sw?! L˓8m^,`;6H5RG`V/u_BǂBvioUO`4Y|qáyUJؿ֛j\?q-uc=ug3@xnuUO3s0X392 z2gs&s$?FIH-eLAa9e06@UIe-H$Ѩmq*Q@ETQB㒔Y HKz/)1 DQ.G3T @fJ8#ƐYQQ<+d0CK0%RR[]X\DH knM_4/JPfQ3`_z7'Tpp% U 1u:d)ԱE,` Et_Z*rHK8 Ō`R 钵)?hh]i~ 6 gq3e#fޯO1B/]I!nzl Kb<gˀ_)ShN(~ i#a-S 38үX?I`ҫ^ a1E'=| UkML#Q?(ZY A(.UE)P HkD]#,U F )R S \=%  geG}C;o)b/Ó +h@p)@ɐ 3i!4pHZ2/1cA!%_P](\h.ŕTŌi+BCKYC:3afɼbyWO%ص*pY?J@jp:-/Z*piK/p#+$FaRU-EIDA@PVDx@Cfw7w{g}W}q r_hTSB!نբxp^;mɭV}a5ɯ y=~2W0m p6ⱞU jf}vƋeU,fۿ_)>Z#qVϐ4]$aV5><8 xGd$sU)ܧ]3C ͓5~}.={ + b+Rzc/zT=o~oLrЂk]*7BNq/nx.&_3"~I۪kcd6 L9҃ m̊MrxgKlpJ·C_rОޫ 615Uk~zpF61G]-.<3sS9I׏ odUs?YbCÝ tj GQҗ=XH&+^d)$+^tW(F sdGpbn=g? S\>|h;|GXwB @ G(A[U X)XI*+iUH( TK@$S|ة2|/W/(2<ڛ&q,2B1Ĉ.գ#$G[=<k #M;͋S([BONɇ:S\,?aNOz)67fY qRZ-`N*[^ݮ+ʅD}uX,וZ~^;}}2o@/wVS3{E7su/WKuL&|xuyiaVGD_iOgՖkSԲZ͖ݯ$_(mg+J.bQo D" nLQ9s \sS=SEO`W]$/c4 +7) m28 ,|0ـkKE?:?h.Kg$~s焻Y|OiS|^lfI{c"xVBw,W'٧_XRم닙XSheXfL:Ro$d#$|Gݏ.&Ev{Τ f$՝EV˅6[ϻ*D tF1j D`0-}UNym#Ifz_ vD a?/vF48]spL蔃ā&p?P'^1&[ p=ćƖ}?@8p,sĒs{:H/5hqRKsK` #0%'''d2w}'L([SAY." (e2p|k{ ZdO2.d)w*r~>sk=QrS2=] .GXz;.$D\f F, Ś k lExWac k * E&12R ́k(uHnԯI.{%̈́Wb,M MW\qJyJ{ͻ6%W e9G(p (f%nss-aǟ}Veb~ ]bgM 5bu٤9+WGLs!}w(cY3!' ГgsuS]gz8_!l("/vqyq D)S=a/]]]Uu:ُMDDœcs5gfȌvhae"A 7ږfsxU |>K.哦–:{4~{dH$zݙn+PH} m;*y{m9AB1Gѱs$dr mnW>R8>7r;:4?@I$Se!ށ"cɸE]}A\dО^>qkZdCw h{-}T]\Pɶ|@Ybґkp *{fBy2L򀻦ON/qg3/W}5I^.Шfntaཹ/ >!BT}y!>%x*H/[TEr^Sox ׎;"#xz9y9]?~n8wdFS)U^ȫ??^ 3)d0rH,̌t8kO_<1S8v&Z!Y!eeIQ qct¬%Jf!0kR [>: g3rW1~b}P9f}|8,T ,*?=iS"o?VY^k@7ZvsjH#DGϽ& oO0O>m ;c%ϿܼTaDTDOƳEQ͸@$ ppLJʊT0)Qx<ΈXQ.p𻹷W }wCiB$eyI秔Ϲ1wf|=/ 21ob*6gky4>]a=~eɮ/z\/er,z. 
СY<$"`,#-[gsҠ|_?/oSOa tqA^6?2"xqO:k)I9O3R m[bMw+4}ƄU؍Ũ?Y٠Gy]3ݑ+y z)Iq^fH(#-"N 𠝓yE[E10AO2q.fiܛ|ZL_4Ws)|Pܼ8xz >M1f|[O龞Dc"(x%a XXuȂFbLՄn9CT`f1-QKtK,8 (po,%Xcrpb6{ANaULnikKA&uNv,˲!.h#6*3tZz #  gtedvfN)$d]quDY@$(\Xh6yӼUk7G;/P"ڹţO7#MZjqvF9+|QMRQk?8x՘fRV-GVtxE-!-@ nfe Zӝ%:IF% }XY;ۛkj9V%QIJf2`y[N9 kf8 a9,R#N@Jbr-5(GL3;no;4\~+SO?s-+x.̮"S]\oC4}ӾrQ6׍ߌWO84zg¶PF 00+ΤreLqDXmv:贌8LƌsFҌe`ĴjE+ 7ElM]4u2JTg^73D$g9U "ywΜDB3u>pLz;g2la/C;EyIg 0 ZzwkG·f4Q^Q=B5=ʻc-t4ho=ʫɭR;SwG&##KD"DȡOd+LL,fx杴4NZJ %Ƣc1 ie39W̊VDZh=lm6J5SX<iUfp|ĝ#&6,3)*{ Eo9 ic ́ anґ dӌ!|pF{aH*#-(RsKeV[GwhxƘ`yl.o|ѓN㓥aQ'^ܺc)h Q3&?fO\=P)aHՃ:j!%ջNIGгS:OtbQ-#7)dv@%R::% SҤRR&.D18,iDx1TX!0O)8(Bo_P60LL,{sYi%4MBl^;{}Z3ak&@^mdyU{JY98mh}ӽt * }jge+J9Ϙt2{<ʌ#cR ;lUspjRfZWdDfIÃFI ޻%NV-GUONUkɮXPC,']"0qˆ8 t!NP\!YsT$ bEOSQ.#stHٟ B%NRu77R*w"!I{j=)wiִFkJ*Zk{U wU& mzM/AV1]F]OEuYP !]'IyK}yU5Z\ؿi.J_e"2G=EdS̛P|2jUV*W^Õw@1=#oMvNFҢ\~T|0Fmķ]`niR_a-e]<5 B~1ӘQ̓|a(8+.t1( a[kPI~Z7r\ɴwXF[gROh Z}ر\lRpHm8 La ^ 6UV 񐘝ZE[GJ{lh&6Lb!JJ3oZW>R[ se>@X|ZN{u?P91O7E)W[Ed%@Q;ӭ~G,Jm(6dY*bd;tc,wA|Y9og::ٔ28z2}f1M:qZoذ1k dN(jtɷe/eNwe5pteEm+K>F&PGWٙ7d+$ GT <%[6SJcf#r6,3Л ݊)Wr/K0-#L.-_1H}7FӚѓwp ;ftOcdtwe0u댗pDx\*#Lx}O}vf|GYWS$5=Jj*b,LwyKKy,S\q<) C!lL }"eO5;]Dx%c۷W#{ͳq>hGrCv$%_}.r]=X7c k;Qaޫi;9} (}d r38^Oc&!}|1O79)i[_Xe;tFyѤ2d$ fn.njczPmGlW Ý:Y$59ӫ y$Vqc) D), Je1&U*Mdgj@2 -~uE\(VZ)~("!P+pj RN6B!Pk)d>R܇b@H*{a isq.BQJofW2B4yjFzNe{SI+"mFW^(S&Ӱy%# 1yfx)l5PAO4"E!s=mepk}㘷 ƪ JmesdA LB%.`Yk蔢9-O>aAUaT{~\'hKהTlv񡺻IeK]E|ږ՛sޝ_F6-e@E^q\C ~puP2zΗT'/-X2@s΄tY_1hf eklaTX`*9oS"nmS[VBϊg}MKTsicSt^E&FxA6a93uyTGGkEW?i$iy5)HWN&r]ťD|VˏG))Vb#2ՙ w j )EC~[]N/y)dr7gϷ)RA=טS*O1yR[^mIYd[Z)+^:]q@BʑK-*eFx"nSS"f?_Gx6#01"ѳ<(C9~ؓR) RcdӅI ٠8aDKFEuCXaF:љ 'A3;hf,*y.VRE k! D 5R K-pMJZ2gf0YжTnbL%-VWk0 55N|^wS)&52rE%pK/*I.*WCF]XKR2qr2|=.gDq^#0,K0Ef8k BeL;mswK -Z^PͼA>"MT^2sғ.BPKC5VnсCYHQamя-Iݸ boqЮBir@n<^Bk,cX43Ϩ> ^u{7P+]ybm ^h<#@Ye.[R-[? hwݠB2_{ž@|J+5xC1Oߟ2Q+^!9 44l_!@ۚO"t;^5f6Cap<ܒ.>$[΁ '󽐌inօ $*r933U)##(ͱkmke>q)T%E T>Y]o_16Od!&oLާYT0' ȵL3y%#MԄէuCTFK6Wҙ|MHL!LPo9))gl>(XE8wҊՑOq_!.57j:8HщTD=zdd 3 S J9t & -cuGeo|H5rNHTtGn9SFڙL>Tav)PPTIk7LJ \tx6%g}fstR '`&LVv= _P_`ܚ:G]sI?{W׍c ݺw':ʲeYiӿ~EI,Y$RdAP_3JmwtOx!;F<-V!KUk|4T\r.Sujgq,g\>B$2V~!(5|XTuD1k39bfY7oɺ r25Ͱ(DXXbY}&Є6P6Q:HXHB&i1D ѐA-ɗ2O9$O38=X6Yh2tZ̲cdUL )hPmyIBf@Vd[ 8.@;ET]eL$'!Vc- QlN`uU4! U;,!j )@l3P:ԀVVad[[ T45Zd`ݩk(DAN OrD?]ZX!ک. >7[Fb$vM| YE˚F\ZZun$Xuk["7P_s-TZnbE\FLy l`L@ zdDƶ`UTΖru͂ - 5"-kq 2/QȕZ$r|D5ְ% ⬸hĶP{9d}!$EX-L;lksj&PRYJ[QQ϶fbMl bq˭ &]u>:r)UB*G?%JLkt vVv A_Dȡ"6UpP/QRODF@{VFlዑT%5uP6 T]Qg1%x46$=358E )мaN!*fL0ի9ԲON19d0jb7mܞgӅnp7_ —h|E FNIײ1UT56&`h>cTUh j]0DjjW^lhZzDcMgPlr֢RYҊA |kFly_.ߦ %aS _F+ًaYKYЀd,ysYΉhn΅pl0υpq 'N- 2 Ojfйl8CmZ-SKsR6!75T&~9v$"|iRQđiJs,ɍ_ISZtn!s>;1 $i<|k{K!Y_eYoOD5[S2S0;[:Ϙޠ?C1#gr &7#5!_}]gH;; ׬6ˈ1'&b}-/Ob`³}%P``'=-!֖"X@1bq BҌl`lVX'\XQ]zFvi; 3@@ر0遹@Ѕp>_Qp@T u~88̣ 7hmpƫ ҲZHdqlġvb {,>B5#@2)pe鵇9|Bu7+ ߻6 /wr>ſ߮~'mu\?M>~nLZ9R=Ϭ@]4OO9^OWk}FSgw&7=u -{g՘} l6QfAb 819BL\j|yrܘ"9F{6vkF~pIBو;͐=mwu %Ϟl[!шOMq'Dm]Ĺ"~bzCBGqX>ٚ T?4Gp YH.z[i-'-F6;HȒAЦpyz 'jwZwZb]əkױƒ]o4!^` ]E'Xc]2wvאepw);k(9kc{J%iy¡} b<2rˌd/o[8%͸(Xy2"gJr#mv˕8L_z/Lc SЗGKalB_:ˢOXPR&:y2KaI(QJB86&)ҩq+vJ/?;;s@[{y|/Fu@:$ᥡzSmɧj'm=Ӕmu%uBP4{qƄЧYxd_^(c^ra̷|sӽ W0́g lwu`]F0cirF O&e .g﵄x$_vˆ%̦zpc蓏@Y5͓0-mm GZ:h )7Cth94<:(awQҤ1hmNIBH^y1\7py!oQf_t^$3dv^t^7;/Kycxr^ʃ=_5p//_*XZ@LIjVg*,٢ ymnuvbcĦJ(l:-Ll*F/5f5#,V RQ+(&_}Eb%9+t֨E{Az(\q ߬>TѶb-YP wxBhc0.1,5h"-X}Ĕ5|Zx??8QBT ?]@ b6x=;'ikhW},EXړpഩj5zwU;s _т'ɟ>r>kP[R?>I:%j' 9un fuNrO9u6B{&'}"ĆI{09“6[K"ήa '_c. 
a{D'l!l׶ک鵓/mJCZBC6p&Cե"k61-=Z{ǼMfcӆ u/gT#%d_#a2xU}}Pw12ҳ yL%E [yj[dܣkgk0Р.-&\CKUwK{ ^h>lyA%rYEƛ'~n_/Ws,鸴+qK_B./ri_؄K{E5ב۞KI.mZ#7ɟ!l~SrinpNEaQKO?5p9|2.c mH_y`_=`g;_fa|dmI OId$9Q$:bUsHJ|MJ,ɥ )'6g f*Z%Kx C} 1<@XeU{NK \2M ˢvyL.p ja:sRɥj$@S#KV؂2t@C`+m⌅Y`Tjh\ *RSj"u`" &JǍG>l!i9T,-7Ew_*O,ۛfÛ>o3n7ɼ,XJ`J5м(~UY(? kt U~G`3doEgjSZymgFEd~(-kU9s"  J-TfG"t2ewNZû p~&4U1hbUr !ڃ2ahTr<"l U*54F2 7Z5Z֙,&a(:/dv)L  NQQ% !ˢ6]v ngr5NU'Gbz! =o8J/(ݗK,mJ3MK\:C)3Gqуg%sgJ\sM)$(-x 8{3j NcAjQ%OIiC}]B §>Ԩj \EPgЌ(zpTAsy_4}{JDͱkju SHDo_T=$׊2hOX YJyD]VeԠ\vLju79nT:ڞ+Kkh5)j u$DW*'Kc@ͥQ"ih1,yVK7$0' Aw`uZHA1/5E;PUKDZ@A|s{TQ8QUyjnrF#ܦٗ}hW9y8 zxef0×mOL:I^"sΒQ8'_tO2l43U2&"3VYʘx'xڟ5шPsE6Afh Mg Chg 0(hMt*6`x@@{9W@KEv=p88 QGRaA_ )IAa.RQr ю'ٌ/bPhhln(  B{Zep@ ?1JEL,rԃ .3{G(WhROl٨ByZ~n" 6Pq!az2»ͺPK ֵB/Pբ;Z J~mK^AeN: ˋU)j'n*~6T5Az 3Ƶ XUu,N*%6x|;N`ne%C;P%9\ vSܖUW^r|$hiqma[!QGəz3X*7hG*v?aȢkwGi] l9xT[ԘEs|f~zu`DfAƄ#@guHdgmqDep$jHߡgy:$Tg9z4r*aj]I8ρ(~de)ߖ׍%(kgqJj!'F^.X_*.ߑQmiat…"Db7׉W_׹z5M QXo$'"c6:X(yVs, \IKB D\״_+#|7Z։rNguwt0T}5T!ۢsjVu&e[e(f5_oȾxݼnYOJxƼy*זkC㚩ʒҊ%8`0lTڒJZ(xٮCO6zntE&e640J~;PW*I"cjѥx BEޥ2BADN gPD:J8k@ւ"P-5ex 0 W =X%K#< Ab*y-߶VqUj$ rW$ ;]!o5tUfJ0iE'*!J b<>Nq0*VMeܶ'SCvZqLmG\l}K5>ƯA} ՍtucY)^ȱAS;aY6 @ ލBQbd߀abo]Th`{+%aJJZS@ 30 j|HGP1]+=@Wn3TxjoL |b*J*C,C[  3g* m0$r<3&*擆g*D+lOjPQ2hjYl""QQW[۾;bT"ŎF#qSAv|[~tK1PSG$OC:a)/ \}D[imKaH P0p+0: vܳˋy\kEd}XTv]#cIBi szrkV"+.T[to-eb,|*k0. a?gn9"Qr "C܎4ٚV#(  XJ-ia<(d s<aڵpl[+Cs&;`[|ˇo}7)Gk9W*֝J{yD` lrCG\Oz8F) 22iFP36ggLP8@KC6}THMwM ? 672cJRK֣{fT"[uB3U8t#cpX_OIݽdGf0V_3lRܽ\o`ʟEg(4U2UE-gȃl`T75龓psRA|Aŋ/Xs~]`{k34} \;9qd1 \Q+8 ~wq;oОXݚխYûn͞~ioʚQKuEsE+ȘfMMYsoIwνEdof;CJ؝3% R0vtm8;g.L[*ѧpšG8%P1P@pYzf&rSEE ^ S+w|On@)3t̘)ZzA(=DT$S_Ci8΃AG>\DshC mFC)3 %R[us>ܖ´Rk4ALP0Dwvq閌Oӛ B{_:=oᒗ8P7Рi dyŜ Aia)"Z6ٙƙg;ý4{LuR&RJ60">Yf%̛Y]/yp> 79SciԖ~W%|>p{SEUk /ki=UTyz-b9Dsӑ&^ԔűH߅$c^M%ϡ1I/@RAf_P0ղOgQsF.i_ */.k2 o~EJМ5Lm.KI͠خ|gwwXքcoo̤Bݛ@4HJo (F+Ay 1c>l2\r}{$f'/{WG6<" ~5agy3liZRKr_痬*Y$2J>ҙq`0;q^h!o\_QɪK0~R+2 ^d}QdqSnrrjI|ADVn~>l.x \`NVh$ȷUÃU1>zWiڶF9oxHg^ Lӿm7hI%r"#4k/˙Ґؤ11 [ Lr1V*}y9 O &ȜI`V:.)ݰk."nR)[nر`@/[p qU!4q8d4Q&1dZFJL)|j7\f`%7ٌ6f[bzWw?RF\߹_d+Y|)[ll(Ӭh}{+ǨP혯oH#󞢜Ru^H)uymijU"7u*O|c&^G($sp?$^@pvh @YVbTC1 JH2vEΌ}~ iw?mQҗߟs?=_9}VVp-@ȱѳ&L=f2+i,mGhR.jﱍ{cgNz!넫6Z!΍ ک]NQX(6ňdoFY_}B3KkϩPV~SU-wC]~iyXmF67mUm9"x!uo0P5'dcFlϣ4KqTY+.@Hq=DlB6f{wkwSǞ ưV<\k__ 6dbo< h873: vgGS3.J7X1W ƕz_k )Hrh:_Nn~2"b ɧ"a!p1'g!}+GyD\qXǹ(4ڵbLt^޶,eŀ,9W{J huj!P|W8v-*q GNb\diX\Y-tc=6IdbbWR!m\qr$1\F>Ƙ4dOXaXR%7[dp>3".8vcgcS/U1p,G8V1gRpdG܏۱2G}:?욫8ê혫Z׋/zw?wŷ{O~㻿^8786.>Eǻ/ەǻV"'?ƅ~nxjcsn &"t&!;7 U}i-̻mFWm6EO]'Mk:ذ *hc@U[13NNy'×ƪsf LhqSR˂LqMU @Ǵ2fxPluwOo M|=x06fEӪڇ*Mh}T ,d'"ΚX0~[{h?,h_*4j FRh_(, JsCo#K#yÞozXL?#/>2}{НwY~ :ĮhIH YI8w <$7b6D^bG1p.VQx_W( 9#95O u0Y(fesN "\JF#S*69+K1O<"gP rLPCa q_պSŶ ҟ:oN<'r爴ѓ꜍_wK7~h'Dœ笓#p-r-P\!xYQg(j)q2cٹ,\Jvh>< 7X8X.ԣ2P+#$(3sΚSF6um"6k$Z8?eTP LۦKTmj@T;RzQD2] 5ii=7F[0:Xhnd8xjVWy^Bfj+t6FH-ibq@yFY V"Ζ0s3aǐY{C+0 ,+!H)mE6_f+._RX+^&#`جJ0huDaa@Ƃ|[/G{IqR|߆Ƽ'^)7J#2G&43d(_S/Cܕ<L=ۧwy;\V5yRϋ~uBƖtȖG MA`l_AͯӸ=^6BBʲW֙d/էqpa}]}C7|}pxL7Dqh((F\"ef!71J*qZRVezhZR @SϩƮSTiSvR,̶?Á]Y[)CҷKavհ`z h{`V ժӻjU^ZVe=[ڊ1R$ޣbP>|nMOJX^&tϗSsxfGycKy>5*Keox5f@Y*1VR7Z|fRL{Eqb֏˒6L~SĔ8KK&fPJbiwtKC뜱=nt:Cp0^i۸g56xmyԶttn gqoӪ w_-C-Z,I גm YkYCXEw:C?;`0q|6R}!iEqq>C O )T!!򐎀1636!vgVJ j ɿq|r*$xK,(E A1ɗ5su&JK'@z@ኌйx W TzҶQvJ~zH|f8JR C!پj?,;nvzY]! 
os]v۾' j@a2\.ɰGSWsYC)mk(/tW(;?uZN\| @:%)nӊ!>ξ웸f2~{b\xXLkX܌ɈhL_ Н!Bmh)tl c5,K;< T#kѣhM۸Z*L]ܗDJZF F|4n488̨sCͰY$}[}bFam,Qdxh AdӲ]cqM雸ޗ2^&09IY)I.8kk5!贍MI``I+SZ*ʓ3b&tT_kHTڻݦ?Lɾ2"bfQ'Jacfr:*f"V2+kl/xe&7˳kL:qr3'` @Yl'ģq5z&y< m A]& ϗSxWI%^Jfˬ%MaքiR@O<}j3cwZ;)k7@9fR#ql+_}WW=SIHԤ0"=,*sgYѹN|q WVY#'9'(?;.L^ zF lt،(E/ x@@zUnJQ\Igd 7UqǼ,؝i('/4%|,?ҹ,rYbq,Z{iHJcA傿DQu}QY N2Qi>z]iZԢ/[z)Xa,#d hGK<x :Dq\44] TZY>FHTS* [_N (V_{4Gi@1Id@YSɌcX=y [R-6*I\| ?QW&$}x:v?{ƭ/[F@UyJ|S9g7O)fȒMCR"-sH1ht725vrklk 0L5O!="l24\I32L1_;Z'k +tE# ++uhxQS$ܝ)s XFؘD"MBkkؗQSP9%G%a3m m"v! gy'/ vII{I1̇@1[5T^,ҁ040Gb򖥯Akw)d۾Et?7YP/ʣ4kozwQHǥ_7VްcM/ۯv)&)B@u %<7U Vtxp:TBW }#F J 6*Q^vv䬯Zca*V}Յʂ4!2,{(!/Z+7JfÃxQnпju3-ayhN1Ve:+ãI38#a{ٻo gvJ '0t;y,hNyNrl;w!$^&f\+AS%cJS55x[OSd9<w\<=ɨEpNeX} N7I'2)!Q>(]#wRĿ?f7ٽݻZ^W5u@刑Ke(ҥ! AVfAjS f=\Nry N[/g]'zwoϚ(ga5[v"&w͠\fogٖԩ#?^zI)vBV/[]{ ˢ) ҈/~nK>R+yrVa{%/?WҤeDTTnQ-mU(k= y]'<(W3%Hj@65%NHׅuQ\\2x[A~-7e]wđp3Fa ؖ9 3CXDiT urCiG}Y6dTUTDM^K/u6L3Em}4h`RPthҳ,MJ0; R@G*Ť,臣t@׬ , Ȭ}Qٌ~o_6Jݽc`d쟶M܄(r?oy[(*[&4RؒJ\ /CYhF VH*8oK (]'E:bʂf}z+ n0C?_s޾R+'F[*ܥ:B7#BUs+j[]:(*]#UXJ N#_GyD[Gd#1 /]sbRbЄΫ봸}z)%.u*f סQ(EL(&"BTuE9бHŘ', 澒>;D{DO>;}<0kU`&[m`ZHY~( >Ka0[햧Rny ,T%WR ,!N)_ hx l{|q>G}\)  Qf#  Ԅj[ /j%Kk+;'\DR5TAqEmJ+U{RDՑ)di+RNS(M#1l_Zc"iVRiHE ȫEO E\H kD;JA\xqQhtj-$W9{3BR3̪E-tOLnCh H}WM˻Ren_p鑈!)Kq Kb1B+V<{y +U|4V!ƂU=g9hmY>QQ0r)LS4^ GO#=^cl;Kᕔm9`EY*sH+ZZ$.|-PJ2T..Jl7»:s.<@ϒe8^A%,!7 qLƣG))k/),_987"ZD bEANkdnԥJ7Ԭ꺬42j21-P@ _*. ,ZQh;@(R!gf"4eU(S)ZȢBW^k8V&xR?IeG ?tMfA1|*`~T @x`xyC_|ѯLVj2z7H=K D&a̳6YdNݮN<pFK`enmDCe ae8JuSLV*oFjNl4BʡG0v_ZH@7L۬+ИGia685%6´#Q OhDa ZhVBОY%m^f{ػoP7$ݻ 1e@ʅ(ԉ2eC.)&=F)]yip7:y7:y[Gi7<)S.K+VT )VeeMF{ӐW .Nԩeɏ̌dZ;Q#!RԵee+%" M}"^FPJmR b6\Νº{Nud_Xx,|t=LlZNg1%3o,vN;|X%8n;s8eMY 8 ]&K碞JDFpER<ƑtJ%3NYi늊e7g߿y,9r1]=q|$,s /__wuwk B^ְ1LYAbN R*쩏eUQ]b%TtZoZ{hf92Fv4VsV.pz찪Xoܦ =N@cZͧ^I''j'ԙ*v&wQǭ͢NUU W[4ˡY_ ƝZ.L F1;msn=;вtzHę󃈈QE2jWO6kEɢ' W&G6ðG6i׊&R6Aܯt;6_VK=Y_`Q">kEZӕLz>xZP#j7MZ5f<|ۻH }׉4jeO=MkvD}>~h9Dkz2ZDhXѕFIU_Nz5XP积SA:s 5P /XǞUՈoSƛ$ uJXWip,PXOL3SŠcMфw`z~):)J=YhD(DF8&5:V |>Յ0KQ!$ҖzD3OpZ=9ќ LҎ=;'%n*9DjX$53)2wu~;!֨[scR笀@PiziEeݭꢘѴs]XgѴ,`<╖OW])7oW`"y߽}^|4_cILRoJ|c&' qSTu<\xm%]%ai8JmeB*P2ք*}K8&Z>@Y]/:hS1յAPPQŝemU\-˲ԕ*pʖ X05*ӞeFC籖  gӱ㼩|]qBPކ姨B53֥۰q#ɺ)?|ՊPT!ѿ.?8ч!u!Tn7׿}U_]]^[JC&>|7)%/4EBQּ9rwߨg_,q"nkꨳluY"}Q0Z/gE>ݰwߠPa{j~c9߷Ԃ@Go5޵5mc鿢 ո_\YǓښd'-JԒ"Q%jPm˩};88A0XA֋)dDU7JT'B ZY5H8V>|b$];G5i[^?E-'{xvi}aAFNuDPccWÓ.x`]-b^YG<$PsN# i\"nF!\Sfr<\wm{t D] we;=aaZެ0=DM7w>dsxqDI뻊Wlzbs%IE'Pi%3 c~Ͳaεch .F{)phOlTz!գݿ\=]JVaPjD+V^` ZOwbS*5y$\?9to8Rfk&S0 U}܊1ciV{|fY}ݳg _  PH0$@q`&DT~#a R<&#jb[.n|Wxsy/oH9/B$D Sbm,u L@Rƚ `S ¤` Ě؉nRPbGK*D{X@X$rCÏ2&_LRD!|P27??%~s,5Ťs3GGyOLr] v#?{X:2Xs-\ҿ{vj8*Lsma|6p Aw%zOffȹDzh"9N)NEe%|+=ҕs5}v&1 9}UJ&Q$jHCrS ~>n,W֭.!SU/)Y nMhWut}On51RdݮJ thuk@CrX K7|hѨI:]5t%c9F߄ot7S>Yf$(o}Vl1̾[حbh\ûjq7wW͝㝻ͻ;ňsW>bCOf45 쀓IGr͎D9a"ڛS夝{<ݲz5~x^VK-^-`J~i:uwYf_ .z4{ۖdMNw_ cWKBF3ų} 8;`omݕ~2n(&TKl("rjf#金p$GQ-o,Wt=K:f]Ps ę+Ĥ 4WU: (Ri hհQ?_pyJs6碞+N!z]*{8ǒ΢qտ$"04)*y.d4|h'! 0<8>\PUA!=Z0ްp&maÝ=W`˭ %^l}Owb VaCKOЏW7޷3[ŸXکsU-8 +C_ WRa  qnβj@_~neT/BEd&s+XD%X`cpK=XgA01 sS2ސ9\+ $ ;\orp"]L}'0f UNqԺ[] BT;X+:3S&:n3Z&4+WZ:u֡yb!]Ѻ Du꾣u*m[ nMhWTxՠ]-ī+DCsF2C,*  Za̘slxr 3$a>@j s#Dだڇx*/$Uvw Z$U2rb3z_cJ*MpsMGHUXqtOsKg/%E~>"r#)D{ ;SJidR8? 
,:q͢|wRen/7Uaba<\DoGLsh-N}Y P7Pwg 0 f*XH5V [g`"@}a׸3+(;vKȣ|HZR iO4 14Y}7<"I>_r_ig(x/.L.a#aK_YӳKA P@#vfp6ohN1\-+htB&$MaQ;`C{ -gTwW)G+d6,%ӁTSY ?n>8ԄN~rhN,X2\d(`IYdJ?H46gV;8Ę3R]ȏݩ7@pQؐ-28=us.Вg'<HbZӝpJt<苉*T[ ʢtu]U=k|{iRY}l$c#GHqPΛx\#N5'CјGn")2rMVghdlzJgJ9DokMScdJ59G$QВP<ԗP~#>D)#!v@j-JY;[q#+Ӯ!aPRݞAG1b= &@YfvŖhO 5GyfXly" خ|+5^lǽFF$0-&5aacA7Jz UO?i%x4_O/6 W3A>zh7PpH;Wq%U CLyYXp#RT\ rTbH) Ӟ 4YAVF\U*2~-PiLK@Mk>[řiɩDk*;P |g,QxX0֍$Gt>SB3Ǝ>'Ez0*1\nČP%w@c%$.b[]s⴨76(S^{1UM}1;{x Bf@@KIyÞ8zDn28mSfhZU,Zk%zHԙ+=2a q \x 3ױ+@S' 4UkPVCLB3kW0:@YոP UMCB8{$T$g2 UuUWKuB,+_Q\B;PJp 09;gvnAnҧuݳ#h$99Q?=^uOKUks`C;; _ hю$[V-ᑘl.͈ -a!I1d IKv7}3poLUfJMe&E+Ӄsq7}sTO1=5_eeAi!-GJ硡;Gne>rHg Gj M_ӭ.5NIwFo8e U]zSw9fVvEUϬw]6A[҂ FXI)q*PX#²F@hehuNP|}Y͇áќ4%X:UtЕB5ֱ0{YA]V*] :TdL@bIzgv$F-?{r5ٯZ<=C ~hBCcV1_:4fw>~1E+o/{SSP~z[- n& ֲ# ~f] a+>bt2x>Ie&׽'R=w:3 [,Qw?K6;D qh_ScIOE#IF98]sm0ޒ8CEU{5i]=,KwOz&cABhcY4Z{hm ]_(yҶwo_igڻɆ #/[v4~ǤrdײϪvꟗT]4ŋƇǏEPR/LLLLj3y?9ZcZp 8,2dōwTeX]ʫ[9Ř;̫B.VyhZo=K)V:˷ߣV.Ey.eϧtJtbPcg14"-VKIqRJH֘.%I{vCΥ3DZ;Fו"%ļ3sJk[\q'5T< W ;S_2nqԄXbC%솕'Gek ȩλ, ,ZؙoyJgåo&,)}3Xc%=6d d7Z[ըxJ=h[5rokY(&aM${e~] \@ V8[Bede@Z`7x"sΓ/ a=Y^(d c4Ssݕ>q޷[M/eI҆?ϯmD,R+{P8/0]1S }`}_Tϓ)9[>F+z is3I&+/*Sz4"x6owAkzA㭛@d.N$ړ)-/ @*R<^a2@Ϙ(+{mteQd A T*2Pc`sȬ(x(Q#%h}(a:#ќ| c]ښ\G|W,J(^~X5oN_&i'qNSwA9yH|(QtPnIq( Y-Hml}x]?IbZWd)(bKIWt蘬`{xxf%e~=oogH+'};ZVOV"?Tu!AآJȽe%YA߱EǗ0,|ctJ7g48mt I[/N@\ c_cc+@E$Jjj k/)E-+ģVtK^e< N5/ZEΥ(O^X>J@EOǣ0C$ن0o)6#xUEZ3|vOOQ33as'vOSFyn/tSDiM2 #X]ͮǪ‚/|F8,tסR4A@[|F[l]7KXq/>ěyu(ޜ~C#R@uAa"]fAgOϿJt.H="~yx_`Z4SñZk; HVO^.G$hy|&Z+viԪc^>- Gl*?‚irYbbgBUUF#p2* 5igeɌwbyjν,) ѹm, rhb딩54Q%=ժ.DLK^ "f+YWd+A QL-iKZ6{ͳVgW7쓮=~,K1+lBlgGW&1grqqmGy~X@Ѣޟ(ҫ=KIfo /zS6=#5/f}9S LHem|ȟZUYx x@Gp7$BQ (%Zȡ99{}qZ!p7׫xZ{ljʬ)^ԋD %jjAbνʴ:Ipj\oΠ68'ޞ",r9-EeSזm@QD8ԋc{rxTn%nWs4S1<D` ˀvdD٪7vwr;T;]xX#9뗩~tL9֦k/~#|H !Q`AZ?zSYPlow dM$e]oԦő/_VW͹ڜ;T,/ykzt+*e$;׽QZ31Z.0HTf^A"hA+g{8 P4%@4<߳Ү[g+< tGa <۵~Ic1pݶJ٪"꓍VF~cq$qn1wq~IŻ|3 IEl *kBRٟE9r2ԛgIk7o~kvd$L^m[TjsUoP{lTA,xqّFnw8*z߼uY9#9y!?[?o ɌW 9e\G\nhq{鏶MO$ή,by-;bsh2=kƘJdhM3ڐȧ.7&Bo$ma{/2(CYs_1x ~6?bIOk`S'bU/iH:K`';IC]'&9){/7M `asK]K)g&|3p!\AY$F* TTjC6݅sBx1@2[lVL` XH NC {`$0'.yÖT+t1Ę׬Ul)>ʧTm%I]7T?|'n zqw+Q̊/?]Q d ~S%AOpmwęȵ~io]VrF5cbE 9u*LDŬ]o۷.f؋Y#;<9ɹ -` aME;$>uE(N49I'rpDK&ZP\Im# +$̍;iiy:Fpt}g$d!ftjyHBL%Kt}(!+;r E&4,;ii [jmmbF NqD)̓'Cey}P~P ؜ )fECJRt\ɼL4R\8m\8ˌww72wϘmMt $/~YRa'>Mt?# - %tk;ȎAD<WW62s L\3VkLl(FőZOx>7q k7aܳ%+$+֞(XPD#wB`SmNGkI,Y q>b2 uJ@>L%2G3tdjL*׊$A|0d~<,w qݩ2'_.?DBt]ĐύG@K? ]zϞ\Fh~ɧZP=/UrNV4ϒ4T "?b>S㔗.|)dM};ᏬcC+^% 1'ήimզ^ٵ]Lh7c [7 ;rlX-/<RɹJ&#BV1H*"޵Gua?w;?S}d^7[8q~"E_o]_|<[خZJ]]^2W m8]mʯ;m?mz^XB_V;]i) hDBI- \na:r3^ Z%z~&kno]b_{} 3#y4ͯWޕq$ЗMv>/ [/l\ش(Ra^CJRռ$a[$=]GS]My Kq-C!E75Y=h\?E)mo/'n;Mw1Ɩkӧ֠n͉K4oo^ד_7&lMr^8G5U'=B?z3Gwު܏jTan\; G[M'm~''rݐƤIa&ȑ8iwbsl#ON SNzYE Liح>rY.oI݁]hDN}om͋e>Q;ȍ42cR<AzYIjIu4!?޻ș2d=MۣZ=_i.|&(IR{Aou)rr.*ЁX[yE!E8 H(l޷vU 1jؔXnG9sTf> De 陜wr,;.r%diWXխ8z9_?] 0)U~GY9;y3"iAۯZUw8f}X4V F[g/87WVύƗ{N_3pn8 GO2i̜TdԛʜDRvO]>fFF/Y~2ftC.EVo\-⫝ܔǵ8_}k t+4m<(KW3 F#͍ 44>kUNL4ZE/MKӯg +`Ko˦0)5 _>gT{`"r8 R7}v7~o݂9qR{' 1QWE'f^Z dQiuB忍CBx5u[grR$=ؕ>d NL762֙ {ɏ?~OnZ4?<(GSm-L]T*Wyk0<_V6 J5"V@3b0$Z2`3٭&%#@<+ܒL5wZeQHYU{)RD1 @2o|E gi%0,p#6v]mCUMIv& ^b^YdRcMt{PqW rAW.-F[pۜ68bn3"=\WRM}.ȗw*W2Fh h[PAY|eQfE"w\JՓ9%Xsyk,O 9нM/M eC-%_]B,w'4_V2v/FIlDky񩓍7&RXQ,-yCl h>5$TT!K{|1h$JV4. 
|}U%pe))QGbީ'Ǻ׉EE$S0Fw^<^6RefZvϦ5z:P;xtv|}'(t9(EȨjq_PidOdxqmI蘍/ɗZVȅ?1As/0Ԡtm@\Ur0 V[F h_ LM~=ۼ8~s¹Qh4Kz=OFGtv|Qo9;7+ѭ )M#h,f=VRF%l,NIo< d|>7Kۣk;A04c/oŸޥ.fybq>G6'׭|7pp*WmN)S;Y41CkHu #D2b+,D:pz4\4="{ ɩ/z2H} XpN-Fҥ'GIi;V҂@턪a ܚ,LevsȲ omn߁t!YtaX*|lTqʸy0nk 40e)H ҇b0n ˩xDN+;Ań<g\P$i2nKeԪ 7n: 7nimrjԊJ]|M*yxXےG@ɮUd+B֢\s!$ o Q40*Us1lLAsu\"u[HmS CGɝ@FSGB#`%'^@F$9C9+3ЖqlKrGÐ+JȢp$e{ v^x2j!s*IʂR{Yc//UTreѯD: !i%f[#4Svp*g=FXv paAje(ܬ6FHZR(g:x1PD(b\"=JTk4g96ƼDn/N;Az0;¤QI"`^ Iduv^0!3bK`l#T[qzؑU }6(##rUsx|_ j ^=>)*T6y.Mwp& 㭩.P2#ZXSfɥl?7v.y⤢^)$!A[Kװ\sH\bOGFo{-|BYdJ|\N/]f7Oxk!h2xuX#,7!;+&20͑^[%>XiF(h)wsoyQVAXraJ{B]pyLW],B:H2a` Ɍw~)A1'B02(2( l5]f#hNB"8TpyV+„5&:neL?ZwՆUor6kߓ_9jE_f2ڣz𧻓ħ8U(ONNNNS?g#aw@cJsQ)*鄷\rfFHr U?_j)abUCT~KZE+Q+GM~q}}jܴ2ƭoB+N93+^bJoŹ>?@) rVN_QJ XH!R(ӾT<VG9o^ osXk.m zf&|PXkNz3nEFL|bhN\"`xȊ{\+K e_{W|7xwY'{9RQnXD݉誮wi(0{*daQ rODzy0zTkTeuϑɚ=۩ >^IڰO)-}KDK*!&颐SZ>_ÕnoSSV ӕF[zdd"*Ap ZrTiFYFYjuuZ, vX3Y&GEQk咹lP@"1 yID=MFmR%YXd(,mT=6zIk[iD$p"ʐtutLZUsUж _ib_yAQamj ^'PٻnWd*Edu ,b)kjdҬY CE:}x.}G=O*dWIzБ`6AՉ JpyQȼDhBVü`qSQZ~pGv 8*R*Xg?|]QvԝƠEa ϩDQ#|e@ ߮wu4IJOժ+X$f.%, lf+JVښde22_V\|ٲHgSF9/F'hExٿɻ Q"lZ2'˗Y^5vvC8?scÓa~MmlusbHpʹd FGwƓHx6RdI)Fyhi1Ώ| 2)"H#Z+Q8t$ yl镑ml{Hb00j#)mqWq5vU\;Zru v8Gw@3fG{.Z5hwrUlzPOT!4+dh湿Ӯy^>=1RsHk']ϷgwПJǂG3hVh Zň9WD!ry%?98+7^FE:C CZ;]j:tG5C3J"rCaDV$ Q }"'l0l^9Y}HdSH&`8́x- Vii9Ij+~:2]7t 5e*暄8D3b+s(-dzodI * ˌ qKR0|~-#Eֻ/X#Ρ1om;zcMyAm_,,|b_2BX<[a {]39 fGLk\n5&s\x9As*g79-*ڼQ"^ 4gmrվ=̢@]\ai}h{,2@hMA|ˊV7)X{A k3nD CU[j ,B.R+:#rNmyY ZO5T>}?/yA뾠n`CգszCs~| f rme2u?ݯK}R =T6} sGoKC(⃡||Y^⣼>j>|i2eYٖp\2vݏ` ww,pX^{A|*)9/WA+Ч$7- &8 pvA pDs5ڍFm:Fİ%.JfOGѣOp q.RΩ$TԎYo-}=պŪybʒN{:xԟ%T)o eISV@Uj: U(A8ݟUj@{ޝW7ʹ{pwOY4LZ@[ D Zk#Ӈ4oC.n8T<Ն\T^6ALQklj=hcS`0Xƅȸ 03҉yN1,y WGj zPc5UнU0߆@J@ 3N:X޸(/$ y@JwJi X9Eբl/ew>f%Bٷ/[=u9PeDWCK?lKL^)&bjM(Q¢KYQfK-E:pqPA[e,ZXһSFT(JfTw;NxG" 4<1BJ c9@$P(G,@Q9'9ZT$ EA*Q\44={Cʆ]A6-64-syE!ihcg}e]a(z)ca^lt ߮צ1\ߟezhrs}⟊P>\J:Ƞ?[VZQ (er sCkwvjܡ`. 99,Pyc- Щ# z &(;NJ!9ca^ĸ rΔXAmPWaw`a~2;o͠H8 0Ad$Hz0.[B0F$q'+$sn;Ih0P' dkQ@z{@SF?0T=Gk!Т|Oh aVtt\'Vsu ժrCs?=J\7m }{BӓkqfMU^H>[(ߖ:&cZ\:Ipե`3FV4 :*8d3yr4@8MɒWo;ݡv@uۿnj~\a߹|tma HAO'/hcĉT@$ ͮ7=**NPɘC@ԣ!8:CIK7ѯBH?[,/by?Εf[OV۹d^+"q a%5I76aQU \}m ,8WюK>5"(ѠXtLځ&ukhdk~n/-α] Qڽ먟$tR-UvqNqDaazd#U [Uc5є|$CoC<) 1+}(yIX!LJ<@7#Z%!8&`KJ EkRƪ?hЖ@)1 k[ =7ıt+ȑ2 'ٱzkPar \gi U/+ٻ6r$W}و) Ȏnx{\zÁvKczHISHzTsbbƶJUB"3HЃ`6 mULN0DFȝ ڵsX-4N$D,(tԼaW'd~(@e)dNO*s\L]ypBNWW+&Yc\Nm[,K!4SF^Y > J Zv+eZL Y3_d˴.!yF m `[1:@m=P,Z?USᄈߙs UJZY@/?1?:iv7`}z榍[o:3Nx"^b+`Xc,{.ޮ*p-cc;EoV^}ͮ_VOVOv)[w<ι-Dz+$%cOvCn-uk,#kw\% ]g+=E`G29@ke*I+{ pHTuP3P{,|fs;rBq%e/@!ےRCH`a Dyqg2^bPERǴҗ~I5 JJu-#;=2\p\Mh{}S\ux"M m`S>HG{">!J( 9dxvlR$>8ZF8o7P-oni5V >SIR#ȍ։25hHXk26/~g#N+SW'(EBFC}Mw|kROi\=}ƻ_Tt9FT4.h:j5XyFS 8xqtC:@ `_y aV)+H))2թWj gi^ CEi[RjQ$cfeϷֳg@1մGbUv}|yRYȑxbo\{anpzx罘ki}4'4ߨ_=zzvMn'cQS{޼SOͮϿՓ̂1 ̺5+8X8󅞓J ĬAbOaJa^IfWOz VLD1Auz#}g#<?B4?"}4ڃ V8bQ`.(IE kH6*fh2EX`%.p /Svj^&q*`Tւ>z"ޡLJ3yY.H1%՘eDZldR(ʴIucPi^׻}-/O>Mo7IF1#zǬ:7K3xCGgG-ݸ5p֓ <{߭"} רf#^ h5`doIxH4;ZAMXߐ/)^pZ@:>*l3u3j^ ge\Jo'b_v||Z 䉚v/ .@L ԗQRcF_VE?*‰l/BFI,*p U3!-URK F!TAMV]eҐ |A*VDs&sԬ1~BdAI)gHsRv]rR!)o<hXHd9P^3dY>(*}{{FS } l6LTl!픗AUuMPrIB.֛:ZU~ WBow^wjsZCiy`+DžNOZ^ r[ HI6k+ۓ$\p|̓~I5wt.'Ix$+-\pY';)d41݌JZn2p8q՛c}E?~I|eT h*>͵ga 0B;@ 63$+*^V^<"ۏIw>v|<$Of*^_Sie-QZ=r"uk2/^%:(,BZEMR7 C:) uI콑ȡMKL]ȪC( &꒔oq"dcrEvcÉRd3Y!ѩ?w%:˯ꄾƙF+OǤ(ًkް@,!0__b JyQTzu΍QmIFPyb!Jۗm%BnAr[ s"kNahSEvTPŤG퇿6B SEK f~dVc,g?ڻrg4p:2/{GDndGヌB5A'hM*sTKv6;|}A|{ s?U؅p`PId=x*a2 i&߿X-(-qJLV<+R7:*tZӦ-1&砂K۶'9e:؜ c)f|/nUlE!ɳH# sڛ4 %c! 
B%i–y6liJ"2Nj?b,%s2+mo mGC=et䶷jRZrz`VgZfEVD$wZ{ȯkKdaXլSBYSވ9oO1DJ`B( B@s*>:E(tykiNQVajD+6nLn2,A6d=?>[pP(1:rFɬ0,},b*Ҋ+ՎfʯQ!2MRv=i(Hq>uTbaY{YUupSjTC](C7rv̜47DӚcqaQ'ױsNCO{s{og:lt#F<?N4u96"k .;ՠ: NzjU(l<U=< b/X3(gB){/AWOn D^Zo}q2s6A&%ʜKEr>Ť7萉Dс 0wmmy%r%htR(wdɑd7ٖ.%YqF";Z\s%3JeD*%ۭ4 L7׬7)ΰ*Xe|XeUFfT 6B *q]5Lr(^ 8K hD,3ml0,zMwot#o`fh66G֒oC̑#9chtDM2х yp!s ߸Z郶 j#VA0!k)H)HSgꦠnj(0Iг(Q0[hUk\^!÷m@-A yfS[YSxp4ֿYt{;A7C㥕@c *%BZ97,"(n, Z\$&%=%9u>4Tc]TnVr|={;7$9[̀ҭh7rW 9* BJK*6 MbY!>])J"G|O߭ޯk_/?g?]_{?)3o<4q<(Z+v;>2Wž)g;[vRz sߊtMa]}E@LqSbN,;_^+nSLZSa!)͓o=ۍ,<QV Lܥ ,ce aj |2 sc#PBH/,q ɩzVy|r/X7i4/ӼZ^s#{la<}kaoº㜸H B+\iFHC8_IxIk7jT(n_̦{]&Ku׆˻2^R=guZ-g:b3齕vsͷۧ1Pd*}b2\˧;*>%VX1xQny݈cEGwJ1At$Bn#8Z}]H|}CuwCO C"FCѕNo&ֵfy-hr?*NNZv`* `>FT|ɬ5Y⇣_r׎H=̿5.ݸnwϗGxS:-:}|տV[C2q~=MWxpmћxa}q3 C 6}gy7hq/΢}xpuvQ[_Nwtn5hvhVvC~qSʜ\F#Wn}i:}EbCk<圓B\mr-#(Go *饲'Ҭ?=,gnϘ۬s5Um民sqrUfl(HI6O.MP2[,L?)ǚp~v"yU2$l&~2iJi5J" lc>s ICd[a6FD9FKaa~ߗ9) J=ehږf7%E͸H:E!yLV:k2z{P\b3 "|l6Y/#ZiQzơ H3Wd꒥|5̳aͿ\\KpI̧ga+Rj4sbqRxװb:@tDH a!Gz9f sȭ]Ԕ.30="5}f٬Ɲ?}&%F+V ӎNfQ4;;_Ro@8kE\Ifջ׆9 R>CJ7vfCoS[ (,\eU\v'v:4PnzuL}XڃYtDҠLM#|6B'}٤0>[VFl`kEi꽙i3ov,g- &d X" n9&Q@D?ˠd{/ȭVReV;ckTt?ILJK%#qױ{ȁUt-/ A% JY#Y @`|&Rh%1y&DϦsXq۷i6ɑpsIRzwJ3xvV{Jw/|hwo Ũ gS?7)`1GEP¼.gEJ^ԭ|W(Ι!v:|!l̸BĘO^HL)m\1*l>|N6mhM3yyvVwb0p1AlA10gkKѶLhxc/s!&UbBLLr! ,3G}"Y4p$l7.=A\.ݰ.)׀]ru&;f)P\2iڍ;Ў@ 0j ~ _&Rc͌[H-0 ݑ%V646r_蚨Ϊr6r1:ǒcHyrs\V )bɌ 6bm$F +S$s:V9Vcxc l8,E_DGr]>`Nj0%剓1~4vbЍT!J%|u&M:,v -Z_"qҰiCT\/\ @|JE9#׉D /q)I6Q59˼Z\% V#L'yF8 mM??/>\:_/&rROE`k1d]\**:Qð}blv=kG`>A{5SIw ըJP1%iI[O? BbT%g$1$veOJ,o6j#C g&*&MX]9뽣dD=+H!IP8MohI^`'8X=_=5)(:TD=.!hzVYwieiX1ZHN~1t$#L:brqiF0R >M. Xbi">#~8e\o'i'5f(W1!In44E'L &,sk \6^ٗj@o={cBllsfm#'$7*ےx 9Omǥq}8\ˊ˧ 1qc /* n޸ȯ0$֫#Ovk'vX[o>F[ Pj#om#f?i#ؕG=z_;j!w27֟Nb=zrdZ : 2|H 2;T7AޚAͷۧ1H̎+J>Q-HVz]_`gDZ vpHO+ 4ܮMZqI}h$Ԋht.FBъ!Eǭ+EzH)Z %=b8~tgc֎oԿY[ ;2nq,M"}ѥ9ꦿwfYOm'\m)@*zҠKRwtjzޝaykYO9v(NMa_NwtnҚnj!8gѽy U:l&‡pr{6OWBaL^+0ȂˬIG)1^^{j"AQG`ÊLBCV87$Bukcbs߯y2km[+fKE=ږVjKco͚rS]sST;M% 4`QÅ?ai_yrtdK.r\ ?Eb]Ctv'\^{X ꠬p<,$k C9vEIS점u6YzRBvwv=Y9Yj_roep``4P0p`uZ:ZZFqp`uf 6)hrzzZCQ%B$DuK;fX7"2j)Xˠsw45x.iZ:-ߦa= Y5{: c-DGZ(f3OW9mr=zK{I0Kk6$HfEQ*pQ"E@Xd9; Tt-!DwJU"u3}A$xI 'S2AƵ$Cuܔ %nH?ҙɍ"+!"Z⸔,3KoX0rAlǮҬq7vk܎-W3~(Vn'˳蛕j+-T`J>+&kClv#'DJCE>ڻrQLˁlUe_|ze,?lRdl"Y9/'c¥ShM"ΰ6iB{ 3OY0OR'F$:Hsq>ɺBT+؛[Y "ͬ׸%[:KcKoN. I2c?9Tq0[xkEK1> kNz"m_S *mh6V Og+huhyt fc%i9k|0R-Bͨg,@lT9PyLL1ctb^wN0Yőɼ+(_16uiiT6W}izYYEhX: NPUI#S] P{Ra9 D*0 ͞ L13wT)ZEߠRJwrT DR$zqP77Q  '錾=#f>ؙ$Kb(>Gtln+T4zDLK* ҜI@>^=~+ܒ$V.yb D%HpJ C%]c>i fC!$ɟNmgAc  bc gQ0*zCoɖ+%.y Zg* |QP(,lX4cpaemWUc?Ӻ<]c$iE3P:/ wްۣ9ЗsGoN \ād5t_;@?? yFZ3J zD۞#^[J,hru] au#6@ &UKeO+sK: ƓSXw O7 !jti>Y " 1p[`Zن_* d|kc + ;s"0\ ]ZQ0S=g+ށhC}=D}? ͞u}EtB{'WU j9ٞ,͖>D0umBuzyDg#=÷K4^LF}xoM˳ AX.TBG4`ZŒ$Y-0YvRIhU}!AyQ1ۡb^8O)$SHvCٲfvRQB/8qϪday_&$^;M I'=mm!m1_R0Gf1u$^4 Hajx?ͽ>~qs?&,F4lswܦp]4/7I  B2,^K3n,. Y F<- Tlƻ=sߏ7I;ߏ|] HeJfMG{ԑMܷz j7tndZ`=LM#}`f\h$1 :G)+_gFo:WӴNW?JrgUϪgUjb0\<3SQg̪UX;)@r}4ҹqg!? _fZxGWzTEy%bXpgikwo6hklAϸ >?[Xb~ÿqm|_皜%+\fI?Kr-(z@M cJ" -S̖U *i夝haB-vf91_Ru—l\,|-~V#Il>͒ŏ_Cc~ ?p LNgCK.ˣDždPAAϴdZ/Tz8D/3ŵk^ɇO]gѠ{iLX%;>i}wTc̀'4進G# *țףVȢ逪U\CG]Hx娊I[-x%OM ,uY"4B~y!Ok\}.g Du̿q sVB-]\ΐ4zGק{VH/G~55VL/e٭vq_WJb)y^Imi-h0;_CJ2]JOvFF˴ %ӣrrv1.Ɉ3',檻:"jqLr45/w2z{zeI$xt6[Rgjx+No%e4 4"Xm&+R&:AFsN8?]SA\ m@j%|Qksz":,smΕ&'Ec+;Hy8dy֥ }q0J-n7og!h)A'4p5.ST̢$iAމS3˧xYs3IɅ&ݽ zJb\Y9Q" #Qo0iAG5[`-iE<=!!!X !V60F* c7vX2?3 f}dւGRkM7 sgWWD[<4Ͱh֋HRQ2FwS6%_5ʵ)Z>;b]>YצhY Va 0a]xIڊ4KRO/oGk K1 3&1WzGffK/A+O"]3+}K'$O@^L޻茖Uͽ ^-/^,QciPȫ`Bhu-q"9h. 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000006047115115156051220017677 0ustar rootroot
Mar 16 15:13:17 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 16 15:13:17 crc restorecon[4589]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 16 15:13:17
crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 
crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 
15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 
crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c19,c24 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 16 15:13:17 
crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 
15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c853,c893 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 
15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 
15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/{
Amazon_Root_CA_2.pem, 6d41d539.0, fb5fa911.0, e35234b1.0, Amazon_Root_CA_3.pem, 8cb5ee0f.0, 7a7c655d.0, f8fc53da.0, Amazon_Root_CA_4.pem, de6d66f3.0, d41b5e2a.0, 41a3f684.0, 1df5a75f.0,
Atos_TrustedRoot_2011.pem, e36a6752.0, b872f2b4.0, 9576d26b.0, 228f89db.0, Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem, fb717492.0, 2d21b73c.0, 0b1b94ef.0, 595e996b.0, Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem, 9b46e03d.0, 128f4b91.0,
Buypass_Class_3_Root_CA.pem, 81f2d2b1.0, Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem, 3bde41ac.0, d16a5865.0, Certum_EC-384_CA.pem, BJCA_Global_Root_CA1.pem, 0179095f.0, ffa7f1eb.0, 9482e63a.0, d4dae3dd.0, BJCA_Global_Root_CA2.pem, 3e359ba6.0,
7e067d03.0, 95aff9e3.0, d7746a63.0, Baltimore_CyberTrust_Root.pem, 653b494a.0, 3ad48a91.0, Certum_Trusted_Network_CA.pem, Buypass_Class_2_Root_CA.pem, 54657681.0, 82223c44.0, e8de2f56.0, 2d9dafe4.0, d96b65e2.0,
ee64a828.0, COMODO_Certification_Authority.pem, 40547a79.0, 5a3f0ff8.0, 7a780d93.0, 34d996fb.0, COMODO_ECC_Certification_Authority.pem, eed8c118.0, 89c02a45.0, Certainly_Root_R1.pem, b1159c4c.0, COMODO_RSA_Certification_Authority.pem, d6325660.0,
d4c339cb.0, 8312c4c1.0, Certainly_Root_E1.pem, 8508e720.0, 5fdd185d.0, 48bec511.0, 69105f4f.0, GlobalSign.1.pem, 0b9bc432.0, Certum_Trusted_Network_CA_2.pem, GTS_Root_R3.pem, 32888f65.0, CommScope_Public_Trust_ECC_Root-01.pem,
6b03dec0.0, 219d9499.0, CommScope_Public_Trust_ECC_Root-02.pem, 5acf816d.0, cbf06781.0, CommScope_Public_Trust_RSA_Root-01.pem, GTS_Root_R4.pem, dc99f41e.0, CommScope_Public_Trust_RSA_Root-02.pem, GlobalSign.3.pem, AAA_Certificate_Services.pem, 985c1f52.0, 8794b4e3.0,
D-TRUST_BR_Root_CA_1_2020.pem, e7c037b4.0, ef954a4e.0, D-TRUST_EV_Root_CA_1_2020.pem, 2add47b6.0, 90c5a3c8.0, D-TRUST_Root_Class_3_CA_2_2009.pem, b0f3e76e.0, 53a1b57a.0, D-TRUST_Root_Class_3_CA_2_EV_2009.pem, GlobalSign_Root_CA.pem, DigiCert_Assured_ID_Root_CA.pem, 5ad8a5d6.0,
68dd7389.0, DigiCert_Assured_ID_Root_G2.pem, 9d04f354.0, 8d6437c3.0, 062cdee6.0, bd43e1dd.0, DigiCert_Assured_ID_Root_G3.pem, 7f3d5d1d.0, c491639e.0, GlobalSign_Root_E46.pem, DigiCert_Global_Root_CA.pem, 3513523f.0, 399e7759.0,
feffd413.0, d18e9066.0, DigiCert_Global_Root_G2.pem, 607986c7.0, c90bc37d.0, 1b0f7e5c.0, 1e08bfd1.0, DigiCert_Global_Root_G3.pem, dd8e9d41.0, ed39abd0.0, a3418fda.0, bc3f2570.0, DigiCert_High_Assurance_EV_Root_CA.pem,
244b5494.0, 81b9768f.0, GlobalSign.2.pem, 4be590e0.0, DigiCert_TLS_ECC_P384_Root_G5.pem, 9846683b.0, 252252d2.0, 1e8e7201.0, ISRG_Root_X1.pem, DigiCert_TLS_RSA4096_Root_G5.pem, d52c538d.0, c44cc0c0.0, GlobalSign_Root_R46.pem,
DigiCert_Trusted_Root_G4.pem, 75d1b2ed.0, a2c66da8.0, GTS_Root_R2.pem, ecccd8db.0, Entrust.net_Certification_Authority__2048_.pem, aee5f10d.0, 3e7271e8.0, b0e59380.0, 4c3982f2.0, Entrust_Root_Certification_Authority.pem, 6b99d060.0, bf64f35b.0,
0a775a30.0, 002c0b4f.0, cc450945.0, Entrust_Root_Certification_Authority_-_EC1.pem, 106f3e4d.0, b3fb433b.0, GlobalSign.pem, 4042bcee.0, Entrust_Root_Certification_Authority_-_G2.pem, 02265526.0, 455f1b52.0, 0d69c7e1.0, 9f727ac7.0,
Entrust_Root_Certification_Authority_-_G4.pem, 5e98733a.0, f0cd152c.0, dc4d6a89.0, 6187b673.0, FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem, ba8887ce.0, 068570d1.0, f081611a.0, 48a195d8.0, GDCA_TrustAUTH_R5_ROOT.pem, 0f6fa695.0, ab59055e.0,
b92fd57f.0, GLOBALTRUST_2020.pem, fa5da96b.0, 1ec40989.0, 7719f463.0, GTS_Root_R1.pem, 1001acf7.0, f013ecaf.0, 626dceaf.0, c559d742.0, 1d3472b9.0, 9479c8c3.0, a81e292b.0, 4bfab552.0,
Go_Daddy_Class_2_Certification_Authority.pem, Sectigo_Public_Server_Authentication_Root_E46.pem, Go_Daddy_Root_Certificate_Authority_-_G2.pem, e071171e.0, 57bcb2da.0, HARICA_TLS_ECC_Root_CA_2021.pem, ab5346f4.0, 5046c355.0, HARICA_TLS_RSA_Root_CA_2021.pem, 865fbdf9.0, da0cfd1d.0, 85cde254.0,
Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem, cbb3f32b.0, SecureSign_RootCA11.pem, Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem, 5860aaa6.0, 31188b5e.0, HiPKI_Root_CA_-_G1.pem, c7f1359b.0, 5f15c80c.0, Hongkong_Post_Root_CA_3.pem, 09789157.0, ISRG_Root_X2.pem, 18856ac4.0,
1e09d511.0, IdenTrust_Commercial_Root_CA_1.pem, cf701eeb.0, d06393bb.0, IdenTrust_Public_Sector_Root_CA_1.pem, 10531352.0, Izenpe.com.pem, SecureTrust_CA.pem, b0ed035a.0, Microsec_e-Szigno_Root_CA_2009.pem, 8160b96c.0, e8651083.0, 2c63f966.0,
Security_Communication_RootCA2.pem, Microsoft_ECC_Root_Certificate_Authority_2017.pem, 8d89cda1.0, 01419da9.0, SSL.com_TLS_RSA_Root_CA_2022.pem, b7a5b843.0, Microsoft_RSA_Root_Certificate_Authority_2017.pem, bf53fb88.0, 9591a472.0, 3afde786.0, SwissSign_Gold_CA_-_G2.pem, NAVER_Global_Root_Certification_Authority.pem, 3fb36b73.0,
d39b0a2c.0, a89d74c2.0, cd58d51e.0, b7db1890.0, NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem, 988a38cb.0, 60afe812.0, f39fc864.0, 5443e9e3.0, OISTE_WISeKey_Global_Root_GB_CA.pem, e73d606e.0, dfc0fe80.0, b66938e9.0,
1e1eab7c.0, OISTE_WISeKey_Global_Root_GC_CA.pem, 773e07ad.0, 3c899c73.0, d59297b8.0, ddcda989.0, QuoVadis_Root_CA_1_G3.pem, 749e9e03.0, 52b525c7.0, Security_Communication_RootCA3.pem, QuoVadis_Root_CA_2.pem, d7e8dc79.0, 7a819ef2.0
} not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 
crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc 
restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:17 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: 
/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 16 15:13:18 crc restorecon[4589]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 16 15:13:18 crc kubenswrapper[4736]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
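The kubenswrapper entries above record the kubelet warning that several of its command-line flags (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --pod-infra-container-image, --system-reserved) are deprecated and should instead be set through the file passed to --config. As a minimal sketch (not part of this log, and assuming a hypothetical local copy named kubelet.log), the snippet below pulls those flag names and reasons out of log text in this fused single-line format:

```python
import re

# Matches kubelet deprecation warnings such as:
#   "Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file ..."
# The lookahead stops at the next "Mar NN" timestamp of this fused log format, or at end of input.
DEPRECATED_FLAG = re.compile(r"Flag (--[\w-]+) has been deprecated, (.*?)(?= Mar \d{2} |\Z)")

def deprecated_flags(log_text: str) -> dict[str, str]:
    """Return {flag: reason} for every flag-deprecation warning found in the log text."""
    return {flag: reason.strip() for flag, reason in DEPRECATED_FLAG.findall(log_text)}

if __name__ == "__main__":
    # "kubelet.log" is a hypothetical path; point it at the extracted log.
    with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
        for flag, reason in sorted(deprecated_flags(fh.read()).items()):
            print(f"{flag}: {reason}")
```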
Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.679276 4736 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691269 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691301 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691310 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691320 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691329 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691338 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691346 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691357 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691366 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691375 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691384 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691394 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691403 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691423 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691433 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691441 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691448 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691456 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691464 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691472 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691480 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691490 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691498 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691507 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691514 4736 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691523 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691531 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691539 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691547 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691554 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691562 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691570 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691577 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691585 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691592 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691600 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691608 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691616 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691623 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691630 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691638 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691646 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691653 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691665 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691674 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691683 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691692 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691701 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691708 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691716 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691724 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691736 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691746 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691756 4736 feature_gate.go:330] unrecognized feature gate: Example Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691766 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691775 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691784 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691792 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691801 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691809 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691816 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691824 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691831 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691839 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691847 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691854 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691862 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691869 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691881 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
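The long run of feature_gate.go warnings above mixes two different things: gate names the upstream kubelet does not recognize at all (OpenShift-specific gates such as NewOLM, GatewayAPI, or PinnedImages) and gates it does recognize but flags as deprecated or already GA when they are set explicitly (KMSv1, DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs, ValidatingAdmissionPolicy). A minimal sketch, assuming the same log text as input, that separates and counts the two kinds:

```python
import re
from collections import Counter

# "unrecognized feature gate: <Name>"
UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")
# "Setting GA feature gate <Name>=<value>" / "Setting deprecated feature gate <Name>=<value>"
FORCED = re.compile(r"Setting (?:GA|deprecated) feature gate (\w+)=(\w+)")

def summarize_gates(log_text: str) -> tuple[Counter, dict[str, str]]:
    """Count unrecognized gate names and collect gates the configuration sets explicitly."""
    unrecognized = Counter(UNRECOGNIZED.findall(log_text))
    forced = {name: value for name, value in FORCED.findall(log_text)}
    return unrecognized, forced

# Usage (hypothetical path):
# unrecognized, forced = summarize_gates(open("kubelet.log").read())
# print(unrecognized.most_common(10))
# print(forced)
```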
Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691891 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.691901 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692059 4736 flags.go:64] FLAG: --address="0.0.0.0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692079 4736 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692093 4736 flags.go:64] FLAG: --anonymous-auth="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692136 4736 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692149 4736 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692159 4736 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692171 4736 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692185 4736 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692195 4736 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692205 4736 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692216 4736 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692226 4736 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692235 4736 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692245 4736 flags.go:64] FLAG: --cgroup-root="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692254 4736 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692263 4736 flags.go:64] FLAG: --client-ca-file="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692271 4736 flags.go:64] FLAG: --cloud-config="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692280 4736 flags.go:64] FLAG: --cloud-provider="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692289 4736 flags.go:64] FLAG: --cluster-dns="[]" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692302 4736 flags.go:64] FLAG: --cluster-domain="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692311 4736 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692320 4736 flags.go:64] FLAG: --config-dir="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692328 4736 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692338 4736 flags.go:64] FLAG: --container-log-max-files="5" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692349 4736 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692358 4736 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692367 4736 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692377 4736 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692386 4736 flags.go:64] FLAG: --contention-profiling="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692395 4736 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692403 4736 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692413 4736 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692422 4736 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692433 4736 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692442 4736 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692451 4736 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692460 4736 flags.go:64] FLAG: --enable-load-reader="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692473 4736 flags.go:64] FLAG: --enable-server="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692482 4736 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692493 4736 flags.go:64] FLAG: --event-burst="100" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692503 4736 flags.go:64] FLAG: --event-qps="50" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692511 4736 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692520 4736 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692530 4736 flags.go:64] FLAG: --eviction-hard="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692541 4736 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692550 4736 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692560 4736 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692569 4736 flags.go:64] FLAG: --eviction-soft="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692578 4736 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692587 4736 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692596 4736 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692605 4736 flags.go:64] FLAG: --experimental-mounter-path="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692613 4736 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692623 4736 flags.go:64] FLAG: --fail-swap-on="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692631 4736 flags.go:64] FLAG: --feature-gates="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692642 4736 flags.go:64] FLAG: --file-check-frequency="20s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692651 4736 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692660 4736 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692669 4736 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692679 4736 flags.go:64] FLAG: --healthz-port="10248" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692688 4736 flags.go:64] FLAG: --help="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692696 4736 flags.go:64] FLAG: --hostname-override="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692705 4736 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692714 4736 flags.go:64] FLAG: --http-check-frequency="20s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692724 4736 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692733 4736 flags.go:64] FLAG: --image-credential-provider-config="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692741 4736 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692750 4736 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692758 4736 flags.go:64] FLAG: --image-service-endpoint="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692768 4736 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692776 4736 flags.go:64] FLAG: --kube-api-burst="100" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692785 4736 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692794 4736 flags.go:64] FLAG: --kube-api-qps="50" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692804 4736 flags.go:64] FLAG: --kube-reserved="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692815 4736 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692824 4736 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692833 4736 flags.go:64] FLAG: --kubelet-cgroups="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692842 4736 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692850 4736 flags.go:64] FLAG: --lock-file="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692861 4736 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692869 4736 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692879 4736 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692892 4736 flags.go:64] FLAG: --log-json-split-stream="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692900 4736 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692909 4736 flags.go:64] FLAG: --log-text-split-stream="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692918 4736 flags.go:64] FLAG: --logging-format="text" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692927 4736 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692936 4736 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692945 4736 flags.go:64] FLAG: --manifest-url="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692953 4736 flags.go:64] FLAG: --manifest-url-header="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692966 4736 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692975 4736 flags.go:64] FLAG: --max-open-files="1000000" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692986 4736 flags.go:64] FLAG: --max-pods="110" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.692995 4736 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693003 4736 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693013 4736 flags.go:64] FLAG: --memory-manager-policy="None" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693021 4736 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693031 4736 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693039 4736 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693048 4736 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693069 4736 flags.go:64] FLAG: --node-status-max-images="50" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693080 4736 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693089 4736 flags.go:64] FLAG: --oom-score-adj="-999" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693098 4736 flags.go:64] FLAG: --pod-cidr="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693146 4736 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693163 4736 flags.go:64] FLAG: --pod-manifest-path="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693180 4736 flags.go:64] FLAG: --pod-max-pids="-1" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693191 4736 flags.go:64] FLAG: --pods-per-core="0" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693201 4736 flags.go:64] FLAG: --port="10250" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693212 4736 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693222 4736 flags.go:64] FLAG: --provider-id="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693231 4736 flags.go:64] FLAG: --qos-reserved="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693241 4736 flags.go:64] FLAG: --read-only-port="10255" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693250 4736 flags.go:64] FLAG: --register-node="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693259 4736 flags.go:64] FLAG: --register-schedulable="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693269 4736 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693284 4736 flags.go:64] FLAG: --registry-burst="10" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693292 4736 flags.go:64] FLAG: --registry-qps="5" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693301 4736 flags.go:64] FLAG: --reserved-cpus="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693310 4736 flags.go:64] FLAG: --reserved-memory="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693321 4736 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693330 4736 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693380 4736 flags.go:64] FLAG: --rotate-certificates="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693391 4736 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693400 4736 flags.go:64] FLAG: --runonce="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693408 4736 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693418 4736 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693428 4736 flags.go:64] FLAG: --seccomp-default="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693437 4736 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693446 4736 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693455 4736 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693465 4736 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693474 4736 flags.go:64] FLAG: --storage-driver-password="root" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693483 4736 flags.go:64] FLAG: --storage-driver-secure="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693493 4736 flags.go:64] FLAG: --storage-driver-table="stats" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693503 4736 flags.go:64] FLAG: --storage-driver-user="root" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693512 4736 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693521 4736 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693532 4736 flags.go:64] FLAG: --system-cgroups="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693541 4736 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693555 4736 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693565 4736 flags.go:64] FLAG: --tls-cert-file="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693574 4736 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693586 4736 flags.go:64] FLAG: --tls-min-version="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693595 4736 flags.go:64] FLAG: --tls-private-key-file="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693604 4736 flags.go:64] FLAG: 
--topology-manager-policy="none" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693614 4736 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693623 4736 flags.go:64] FLAG: --topology-manager-scope="container" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693631 4736 flags.go:64] FLAG: --v="2" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693643 4736 flags.go:64] FLAG: --version="false" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693656 4736 flags.go:64] FLAG: --vmodule="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693667 4736 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.693677 4736 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693884 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693894 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693903 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693911 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693919 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693927 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693935 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693943 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693950 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693958 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693966 4736 feature_gate.go:330] unrecognized feature gate: Example Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693973 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693981 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693989 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.693997 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694005 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694013 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694021 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694028 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 
15:13:18.694036 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694044 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694052 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694059 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694067 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694075 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694083 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694090 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694098 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694154 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694163 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694172 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694180 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694189 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694197 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694206 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694215 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694223 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694231 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694239 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694246 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694256 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694263 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694271 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694279 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694287 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 
15:13:18.694298 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694307 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694316 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694326 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694335 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694344 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694354 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694361 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694369 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694377 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694385 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694396 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694406 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694415 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694424 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694433 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694442 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694451 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694459 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694469 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694477 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694485 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694493 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694503 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
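The flags.go entries earlier in this section (the long "FLAG: --name=value" dump) record the exact options the kubelet was started with, which is easier to inspect as a dictionary than as raw log text. A minimal sketch, again assuming the fused single-line log capture seen here; the wrapped entries that split "FLAG:" from its value across a line break are tolerated by the \s+ in the pattern:

```python
import re

# Matches entries like: flags.go:64] FLAG: --node-ip="192.168.126.11"
FLAG_LINE = re.compile(r'flags\.go:\d+\] FLAG:\s+(--[\w-]+)="((?:[^"\\]|\\.)*)"')

def kubelet_flags(log_text: str) -> dict[str, str]:
    """Return the kubelet's startup flags as {flag: value} parsed from its flags.go dump."""
    return {flag: value for flag, value in FLAG_LINE.findall(log_text)}

# Usage (hypothetical path):
# flags = kubelet_flags(open("kubelet.log").read())
# print(flags.get("--node-ip"), flags.get("--config"), flags.get("--system-reserved"))
```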
Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694511 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.694520 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.694545 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.707834 4736 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.707886 4736 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.707992 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708011 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708018 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708024 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708029 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708035 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708043 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708051 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708058 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708064 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708069 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708074 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708080 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708085 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708089 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708095 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708116 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708122 4736 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708127 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708132 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708137 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708142 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708147 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708152 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708156 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708161 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708166 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708170 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708176 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708181 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708186 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708191 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708197 4736 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708201 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708208 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708213 4736 feature_gate.go:330] unrecognized feature gate: Example Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708218 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708222 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708227 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708231 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708236 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708241 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708246 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708250 4736 
feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708256 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708262 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708267 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708273 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708279 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708284 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708289 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708296 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708301 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708307 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708312 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708318 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708322 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708327 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708331 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708336 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708341 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708346 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708350 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708356 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708361 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708365 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708370 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708374 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 16 15:13:18 
crc kubenswrapper[4736]: W0316 15:13:18.708378 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708383 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708388 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.708399 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708574 4736 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708587 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708593 4736 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708598 4736 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708603 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708608 4736 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708613 4736 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708618 4736 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708623 4736 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708628 4736 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708633 4736 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708638 4736 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708643 4736 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708647 4736 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708652 4736 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708656 4736 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708661 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708666 4736 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708670 4736 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 16 15:13:18 crc 
kubenswrapper[4736]: W0316 15:13:18.708676 4736 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708684 4736 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708689 4736 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708693 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708701 4736 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708708 4736 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708715 4736 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708721 4736 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708727 4736 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708733 4736 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708738 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708743 4736 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708748 4736 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708752 4736 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708757 4736 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708763 4736 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708767 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708772 4736 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708776 4736 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708781 4736 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708785 4736 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708789 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708794 4736 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708798 4736 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708803 4736 feature_gate.go:330] unrecognized feature gate: 
AzureWorkloadIdentity Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708807 4736 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708812 4736 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708816 4736 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708822 4736 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708827 4736 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708832 4736 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708836 4736 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708841 4736 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708847 4736 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708851 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708858 4736 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708866 4736 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708872 4736 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708878 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708883 4736 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708888 4736 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708894 4736 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708899 4736 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708904 4736 feature_gate.go:330] unrecognized feature gate: Example Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708909 4736 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708913 4736 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708919 4736 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708924 4736 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708929 4736 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708934 4736 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 16 15:13:18 crc 
kubenswrapper[4736]: W0316 15:13:18.708939 4736 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.708945 4736 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.708954 4736 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.710690 4736 server.go:940] "Client rotation is on, will bootstrap in background" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.714566 4736 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.719432 4736 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.719637 4736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.722633 4736 server.go:997] "Starting client certificate rotation" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.722674 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.722930 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.755422 4736 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.762356 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.765194 4736 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.789476 4736 log.go:25] "Validated CRI v1 runtime API" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.832282 4736 log.go:25] "Validated CRI v1 image API" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.834996 4736 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.841328 4736 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-16-15-08-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.841395 4736 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.865518 4736 manager.go:217] Machine: {Timestamp:2026-03-16 15:13:18.859960884 +0000 UTC m=+0.587351251 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:378d5087-0041-4b41-a060-d9ae2cec6524 BootID:6132c463-6382-43f1-ba00-8f3804f19383 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6a:9a:69 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6a:9a:69 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:51:28:20 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b5:42:68 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2a:3c:ff Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:81:86:b6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:42:ce:ab:85:5d:02 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:c2:f9:4c:3e:a0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.865892 4736 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.866149 4736 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.867611 4736 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.867957 4736 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.868022 4736 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.868472 4736 topology_manager.go:138] "Creating topology manager with none policy" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.868494 4736 container_manager_linux.go:303] "Creating device plugin manager" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.869461 4736 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.869518 4736 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.869894 4736 state_mem.go:36] "Initialized new in-memory state store" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.870035 4736 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.874658 4736 kubelet.go:418] "Attempting to sync node with API server" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.874712 4736 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.874760 4736 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.874788 4736 kubelet.go:324] "Adding apiserver pod source" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.874815 4736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.881894 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.882059 4736 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.882017 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.882186 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.883346 4736 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.884695 4736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.887839 4736 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890398 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890447 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890467 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890482 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890503 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890529 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890544 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890566 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890582 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890597 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890618 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.890633 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.892407 4736 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.896761 
4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.898212 4736 server.go:1280] "Started kubelet" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.900467 4736 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.900699 4736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 16 15:13:18 crc systemd[1]: Started Kubernetes Kubelet. Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.902279 4736 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.904029 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.904095 4736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.904801 4736 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.904828 4736 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.907273 4736 factory.go:55] Registering systemd factory Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.907362 4736 factory.go:221] Registration of the systemd container factory successfully Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.914807 4736 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.915022 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.915330 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.915387 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.915744 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.916725 4736 factory.go:153] Registering CRI-O factory Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.916801 4736 factory.go:221] Registration of the crio container factory successfully Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.916898 4736 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.916970 4736 factory.go:103] Registering Raw factory Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.917033 4736 manager.go:1196] Started watching for new ooms in manager Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.917618 4736 manager.go:319] Starting recovery of all containers Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.918030 4736 server.go:460] "Adding debug handlers to kubelet server" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.915934 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929643 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929696 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929709 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929721 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929731 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929741 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929750 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929762 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929772 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929781 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929792 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929801 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929810 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929821 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929829 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929838 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929848 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929859 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929870 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929880 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929891 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929901 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929912 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929924 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929934 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929945 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929957 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929967 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929976 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929985 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.929995 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930004 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930014 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930023 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930033 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930044 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930053 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930063 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930071 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930079 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930089 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930133 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930142 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930151 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930161 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930172 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930182 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930193 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930203 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930214 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930222 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930232 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.930246 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933479 4736 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933509 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933523 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933535 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933546 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933562 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933572 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933582 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933591 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933603 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933612 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933623 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933633 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933644 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933654 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933663 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933675 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933685 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933694 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933703 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933714 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933724 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933735 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933744 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933753 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933764 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933774 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933785 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933794 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933804 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933814 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933824 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933833 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933843 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933853 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933863 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933874 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933883 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933893 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933903 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933945 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933955 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.933965 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934008 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934020 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934029 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934039 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934052 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934062 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934072 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934082 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934091 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934119 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934130 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934140 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934150 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934160 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934170 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934181 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934195 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934205 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934216 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934225 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934234 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934245 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934255 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934265 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934274 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934284 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934293 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934303 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934314 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934323 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934333 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934343 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934354 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934364 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934376 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934385 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934396 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934406 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934415 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934425 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934437 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934447 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934456 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934465 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934474 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934484 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934492 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934503 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934514 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934529 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934539 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934549 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934559 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934568 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934578 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934588 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934598 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934607 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934617 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934626 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934636 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934647 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934656 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934667 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934677 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934686 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934694 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934704 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934714 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934723 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934732 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934741 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934750 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934759 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934768 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934777 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934788 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934799 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934809 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934818 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934828 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934837 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934846 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934855 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934865 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934873 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934883 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934893 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934903 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934914 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934924 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934934 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934944 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934974 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934984 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.934993 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935004 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935015 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935028 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935038 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935049 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935059 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935069 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935079 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935088 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935098 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935154 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935164 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935173 4736 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935182 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935192 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935201 4736 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935212 4736 reconstruct.go:97] "Volume reconstruction finished" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.935219 4736 reconciler.go:26] "Reconciler: start to sync state" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.944457 4736 manager.go:324] Recovery completed Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.959336 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.961210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.961274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.961290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.970462 4736 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.970491 4736 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.970536 4736 state_mem.go:36] "Initialized new in-memory state store" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.974296 4736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.976667 4736 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.976725 4736 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.976759 4736 kubelet.go:2335] "Starting kubelet main sync loop" Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.976933 4736 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 16 15:13:18 crc kubenswrapper[4736]: W0316 15:13:18.977693 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:18 crc kubenswrapper[4736]: E0316 15:13:18.977783 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.984167 4736 policy_none.go:49] "None policy: Start" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.985013 4736 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 16 15:13:18 crc kubenswrapper[4736]: I0316 15:13:18.985061 4736 state_mem.go:35] "Initializing new in-memory state store" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.015418 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.038358 4736 manager.go:334] "Starting Device Plugin manager" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.039186 4736 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.039220 4736 server.go:79] "Starting device plugin registration server" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.039798 4736 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.039814 4736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.040377 4736 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.040478 4736 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.040494 4736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.048986 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.077212 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:13:19 crc kubenswrapper[4736]: 
I0316 15:13:19.077355 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079054 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079117 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079263 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079544 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.079613 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080221 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080264 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080365 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080529 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.080577 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081762 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081772 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.081836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.082081 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.082208 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083165 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083681 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083759 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083760 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.083836 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.084546 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085791 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085807 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.085895 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.086160 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.086201 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.086915 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.086961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.086976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.117037 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137228 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137297 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137330 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137359 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137389 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137413 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137436 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137459 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137602 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137656 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137688 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137716 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.137739 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.140467 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.141737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.141799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.141823 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.141871 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.142590 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239643 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239673 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239699 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239731 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239755 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239778 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239826 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239861 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239855 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239911 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239990 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239944 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239885 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239978 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240007 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240029 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.239884 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240170 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240229 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240310 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240340 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240389 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240425 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.240323 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.343450 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.344777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.344813 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.344826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.344856 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.345174 4736 kubelet_node_status.go:99] "Unable to register node with 
API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.425325 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.446201 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.466200 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.476921 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.483848 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.485201 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e05b9d9b7257b6aba24db2f4ec5135aceb787f6526863760b23b95fc1de76f03 WatchSource:0}: Error finding container e05b9d9b7257b6aba24db2f4ec5135aceb787f6526863760b23b95fc1de76f03: Status 404 returned error can't find the container with id e05b9d9b7257b6aba24db2f4ec5135aceb787f6526863760b23b95fc1de76f03 Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.498828 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2865688a4e8818c9c010e027cfc8758c481ce4446617fb77e493c564c205feec WatchSource:0}: Error finding container 2865688a4e8818c9c010e027cfc8758c481ce4446617fb77e493c564c205feec: Status 404 returned error can't find the container with id 2865688a4e8818c9c010e027cfc8758c481ce4446617fb77e493c564c205feec Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.506859 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-480b84bd4155bc3ceaa0d03ed288012fbf6789605f1eae74483d77b4de863de5 WatchSource:0}: Error finding container 480b84bd4155bc3ceaa0d03ed288012fbf6789605f1eae74483d77b4de863de5: Status 404 returned error can't find the container with id 480b84bd4155bc3ceaa0d03ed288012fbf6789605f1eae74483d77b4de863de5 Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.507827 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-fc131f899fb7d4172c92d05fb84885aacbdeb3185d1f3e8e5497690525f263d5 WatchSource:0}: Error finding container fc131f899fb7d4172c92d05fb84885aacbdeb3185d1f3e8e5497690525f263d5: Status 404 returned error can't find the container with id fc131f899fb7d4172c92d05fb84885aacbdeb3185d1f3e8e5497690525f263d5 Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.518632 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection 
refused" interval="800ms" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.745990 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.747766 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.747803 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.747817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.747844 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.748396 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.796911 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.797023 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.898349 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:19 crc kubenswrapper[4736]: W0316 15:13:19.977262 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.977374 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.982934 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2865688a4e8818c9c010e027cfc8758c481ce4446617fb77e493c564c205feec"} Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.984256 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e05b9d9b7257b6aba24db2f4ec5135aceb787f6526863760b23b95fc1de76f03"} Mar 16 15:13:19 crc 
kubenswrapper[4736]: W0316 15:13:19.985832 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:19 crc kubenswrapper[4736]: E0316 15:13:19.985923 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.985866 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b8f4b5b6b3e819d16b6c8f03821c1e4eec04372943c5bf3f7a78609d967832d4"} Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.988858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fc131f899fb7d4172c92d05fb84885aacbdeb3185d1f3e8e5497690525f263d5"} Mar 16 15:13:19 crc kubenswrapper[4736]: I0316 15:13:19.990754 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"480b84bd4155bc3ceaa0d03ed288012fbf6789605f1eae74483d77b4de863de5"} Mar 16 15:13:20 crc kubenswrapper[4736]: E0316 15:13:20.320033 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Mar 16 15:13:20 crc kubenswrapper[4736]: W0316 15:13:20.470537 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:20 crc kubenswrapper[4736]: E0316 15:13:20.470680 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.549481 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.551556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.551627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.551662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.551694 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:20 
crc kubenswrapper[4736]: E0316 15:13:20.552377 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.893969 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:13:20 crc kubenswrapper[4736]: E0316 15:13:20.895550 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.898639 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.997468 4736 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88" exitCode=0 Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.997568 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88"} Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.997614 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.998943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.998990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:20 crc kubenswrapper[4736]: I0316 15:13:20.999008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.003524 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f10f7586e8207e4e593cfb8c44fbc56ee1c0f25ec07cb0c9dfa5ee8d23413f9"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.003554 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c1cac7624f273e3e97ea9932a8346fb9552dd0cc7f4267a915e594d1b76a908c"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.003564 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.003585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e5468b43d6997c8121c6f55c925c07ad833511ca569242bfbb99b4c49f9b712"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.003609 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.004841 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.004870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.004879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.006369 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6" exitCode=0 Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.006429 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.006524 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.007932 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.007976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.007997 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.009956 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473" exitCode=0 Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.010080 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.010079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.011049 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.011069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.011081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.012199 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:21 crc 
kubenswrapper[4736]: I0316 15:13:21.013915 4736 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c57635c09555110714283b63413fc90daed8a5415c644d9fd774167b70c3aa6d" exitCode=0 Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.013952 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c57635c09555110714283b63413fc90daed8a5415c644d9fd774167b70c3aa6d"} Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.014004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.014047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.014067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.014020 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.017029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.017057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.017067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:21 crc kubenswrapper[4736]: E0316 15:13:21.504743 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:13:21 crc kubenswrapper[4736]: I0316 15:13:21.898462 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:21 crc kubenswrapper[4736]: E0316 15:13:21.921799 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.027240 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.027296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.027311 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.027320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.029622 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1" exitCode=0 Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.029747 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.029793 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.031340 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.031375 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.031388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.031721 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"946fe60d370cae10ed8d1fce77c2e5e95a975029356d5c84ac05a620822588e1"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.031836 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.033193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.033239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.033254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.036419 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.036464 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.036478 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad"} Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.036514 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.036581 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.037530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.037587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.037598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.038652 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.038685 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.038696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:22 crc kubenswrapper[4736]: W0316 15:13:22.149081 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:22 crc kubenswrapper[4736]: E0316 15:13:22.149189 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.152524 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.156184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.159351 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.159398 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.159440 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:22 crc kubenswrapper[4736]: E0316 15:13:22.160145 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 16 15:13:22 crc kubenswrapper[4736]: W0316 15:13:22.458824 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 16 15:13:22 crc kubenswrapper[4736]: E0316 15:13:22.458926 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 16 15:13:22 crc kubenswrapper[4736]: I0316 15:13:22.921049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.042753 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c736c47ce7a235a6e0926065f2f42eaf8dd30c1b7a59bc1f4f368e72f70d30c3"} Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.042814 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.043736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.043778 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.043788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.045956 4736 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6" exitCode=0 Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.046009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6"} Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.046127 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.046200 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.046129 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047056 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047490 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047534 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:23 crc kubenswrapper[4736]: I0316 15:13:23.047586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.053362 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.053878 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786"} Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.053921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19"} Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.053937 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359"} Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.053947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4"} Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.054019 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.054403 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.054845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.054871 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.054884 4736 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.055323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.055382 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.055395 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.496087 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.496414 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.498525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.498592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.498609 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.613681 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:24 crc kubenswrapper[4736]: I0316 15:13:24.993606 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.064019 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7"} Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.064059 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.064209 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.065691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.065731 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.065746 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.066034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.066130 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.066157 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.360832 4736 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.362669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.362983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.363048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.363166 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.810758 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.810963 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.816838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.816915 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:25 crc kubenswrapper[4736]: I0316 15:13:25.816937 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.058312 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.066917 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.066941 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068813 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:26 crc kubenswrapper[4736]: I0316 15:13:26.068992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:27 crc kubenswrapper[4736]: I0316 15:13:27.069922 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:27 crc kubenswrapper[4736]: I0316 15:13:27.073960 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:27 crc 
kubenswrapper[4736]: I0316 15:13:27.074000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:27 crc kubenswrapper[4736]: I0316 15:13:27.074016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:27 crc kubenswrapper[4736]: I0316 15:13:27.496099 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:13:27 crc kubenswrapper[4736]: I0316 15:13:27.496227 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.172913 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.173251 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.174799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.174854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.174883 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:28 crc kubenswrapper[4736]: I0316 15:13:28.404006 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 16 15:13:29 crc kubenswrapper[4736]: E0316 15:13:29.049228 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:13:29 crc kubenswrapper[4736]: I0316 15:13:29.075985 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:29 crc kubenswrapper[4736]: I0316 15:13:29.077491 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:29 crc kubenswrapper[4736]: I0316 15:13:29.077546 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:29 crc kubenswrapper[4736]: I0316 15:13:29.077565 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.171170 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.172346 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.174531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.174790 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.175001 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.179656 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.201603 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:30 crc kubenswrapper[4736]: I0316 15:13:30.207555 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:31 crc kubenswrapper[4736]: I0316 15:13:31.082460 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:31 crc kubenswrapper[4736]: I0316 15:13:31.084263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:31 crc kubenswrapper[4736]: I0316 15:13:31.084328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:31 crc kubenswrapper[4736]: I0316 15:13:31.084346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:32 crc kubenswrapper[4736]: I0316 15:13:32.085707 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:32 crc kubenswrapper[4736]: I0316 15:13:32.087646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:32 crc kubenswrapper[4736]: I0316 15:13:32.087738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:32 crc kubenswrapper[4736]: I0316 15:13:32.087764 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:32 crc kubenswrapper[4736]: I0316 15:13:32.899558 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.092555 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 16 15:13:33 crc kubenswrapper[4736]: W0316 15:13:33.094826 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.094931 4736 trace.go:236] Trace[2083664978]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Mar-2026 15:13:23.093) (total time: 10001ms): Mar 16 15:13:33 crc kubenswrapper[4736]: Trace[2083664978]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:13:33.094) Mar 16 15:13:33 crc kubenswrapper[4736]: Trace[2083664978]: [10.001436902s] [10.001436902s] END Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.094965 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.095254 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c736c47ce7a235a6e0926065f2f42eaf8dd30c1b7a59bc1f4f368e72f70d30c3" exitCode=255 Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.095325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c736c47ce7a235a6e0926065f2f42eaf8dd30c1b7a59bc1f4f368e72f70d30c3"} Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.095520 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.096513 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.096553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.096564 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.097349 4736 scope.go:117] "RemoveContainer" containerID="c736c47ce7a235a6e0926065f2f42eaf8dd30c1b7a59bc1f4f368e72f70d30c3" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.315607 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:33 crc kubenswrapper[4736]: W0316 15:13:33.373378 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.373486 4736 trace.go:236] Trace[1252287698]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Mar-2026 15:13:23.371) (total time: 10001ms): Mar 16 15:13:33 crc kubenswrapper[4736]: Trace[1252287698]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:13:33.373) Mar 16 15:13:33 crc kubenswrapper[4736]: Trace[1252287698]: [10.001818905s] [10.001818905s] END Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.373518 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Mar 16 15:13:33 crc 
kubenswrapper[4736]: I0316 15:13:33.479778 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.479870 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 16 15:13:33 crc kubenswrapper[4736]: W0316 15:13:33.484606 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.484714 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:33 crc kubenswrapper[4736]: W0316 15:13:33.485983 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.486062 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.486664 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.487159 4736 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.487230 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.489728 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.489976 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 16 15:13:33 crc kubenswrapper[4736]: E0316 15:13:33.490833 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:13:33 crc kubenswrapper[4736]: I0316 15:13:33.901217 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:33Z is after 2026-02-23T05:33:13Z Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.108463 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.110962 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db"} Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.111212 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.112530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.112570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.112582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:34 crc kubenswrapper[4736]: I0316 15:13:34.900006 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:34Z is after 2026-02-23T05:33:13Z Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.117224 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.118287 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.121308 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" exitCode=255 Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.121393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db"} Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.121495 4736 scope.go:117] "RemoveContainer" containerID="c736c47ce7a235a6e0926065f2f42eaf8dd30c1b7a59bc1f4f368e72f70d30c3" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.121669 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.123097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.123198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.123219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.124064 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:35 crc kubenswrapper[4736]: E0316 15:13:35.124542 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:35 crc kubenswrapper[4736]: I0316 15:13:35.900834 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:35Z is after 2026-02-23T05:33:13Z Mar 
16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.068267 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.127689 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.131777 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.133165 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.133219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.133232 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.133946 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:36 crc kubenswrapper[4736]: E0316 15:13:36.134213 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.139359 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:36 crc kubenswrapper[4736]: I0316 15:13:36.902911 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:36Z is after 2026-02-23T05:33:13Z Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.135686 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.137358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.137437 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.137459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.138485 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:37 crc kubenswrapper[4736]: E0316 15:13:37.138803 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:37 crc kubenswrapper[4736]: W0316 15:13:37.426399 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:37Z is after 2026-02-23T05:33:13Z Mar 16 15:13:37 crc kubenswrapper[4736]: E0316 15:13:37.426510 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.496443 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.496545 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:13:37 crc kubenswrapper[4736]: I0316 15:13:37.902645 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:37Z is after 2026-02-23T05:33:13Z Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.138735 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.139872 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.139927 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.139949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.140933 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:38 crc kubenswrapper[4736]: E0316 15:13:38.141276 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.445368 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.445745 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.447451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.447521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.447536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.467050 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 16 15:13:38 crc kubenswrapper[4736]: W0316 15:13:38.868519 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:38Z is after 2026-02-23T05:33:13Z Mar 16 15:13:38 crc kubenswrapper[4736]: E0316 15:13:38.868609 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:38 crc kubenswrapper[4736]: I0316 15:13:38.903354 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:38Z is after 2026-02-23T05:33:13Z Mar 16 15:13:39 crc kubenswrapper[4736]: E0316 15:13:39.049587 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.141034 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.142618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.142694 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.142715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.890627 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.892274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.892376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.892399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.892458 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:39 crc kubenswrapper[4736]: E0316 15:13:39.894199 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:39Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 16 15:13:39 crc kubenswrapper[4736]: E0316 15:13:39.896283 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:39Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:13:39 crc kubenswrapper[4736]: I0316 15:13:39.900342 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:39Z is after 2026-02-23T05:33:13Z Mar 16 15:13:40 crc kubenswrapper[4736]: I0316 15:13:40.903003 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:40Z is after 2026-02-23T05:33:13Z Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.606680 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:13:41 crc kubenswrapper[4736]: E0316 15:13:41.614016 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:41Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.867095 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.867393 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.869525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.869654 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.869720 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.873079 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:41 crc kubenswrapper[4736]: E0316 15:13:41.874074 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:41 crc kubenswrapper[4736]: I0316 15:13:41.907331 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:41Z is after 2026-02-23T05:33:13Z Mar 16 15:13:42 crc kubenswrapper[4736]: I0316 15:13:42.901826 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:42Z is after 2026-02-23T05:33:13Z Mar 16 15:13:42 crc kubenswrapper[4736]: W0316 15:13:42.956624 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:42Z is after 2026-02-23T05:33:13Z Mar 16 15:13:42 crc kubenswrapper[4736]: E0316 15:13:42.956749 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.316414 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.316724 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.318403 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.318467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.318479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 
15:13:43.319099 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:43 crc kubenswrapper[4736]: E0316 15:13:43.319407 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:43 crc kubenswrapper[4736]: E0316 15:13:43.497277 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:43Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:13:43 crc kubenswrapper[4736]: I0316 15:13:43.904000 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:43Z is after 2026-02-23T05:33:13Z Mar 16 15:13:44 crc kubenswrapper[4736]: W0316 15:13:44.203191 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:44Z is after 2026-02-23T05:33:13Z Mar 16 15:13:44 crc kubenswrapper[4736]: E0316 15:13:44.203322 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:44 crc kubenswrapper[4736]: I0316 15:13:44.900710 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:44Z is after 2026-02-23T05:33:13Z Mar 16 15:13:45 crc kubenswrapper[4736]: I0316 15:13:45.901691 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-03-16T15:13:45Z is after 2026-02-23T05:33:13Z Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.897471 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:46 crc kubenswrapper[4736]: E0316 15:13:46.897647 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:46Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.899677 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.899704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.899715 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.899739 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:46 crc kubenswrapper[4736]: I0316 15:13:46.903717 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:46Z is after 2026-02-23T05:33:13Z Mar 16 15:13:46 crc kubenswrapper[4736]: E0316 15:13:46.907149 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:46Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.496772 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.496911 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.497001 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.497241 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.498780 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 
15:13:47.498824 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.498839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.499409 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.499583 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa" gracePeriod=30 Mar 16 15:13:47 crc kubenswrapper[4736]: I0316 15:13:47.899838 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:47Z is after 2026-02-23T05:33:13Z Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.171473 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.172840 4736 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa" exitCode=255 Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.172871 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa"} Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.172896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8"} Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.172973 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.174322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.174361 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.174370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:48 crc kubenswrapper[4736]: I0316 15:13:48.901399 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:48Z is after 2026-02-23T05:33:13Z Mar 16 15:13:49 crc kubenswrapper[4736]: E0316 15:13:49.049794 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:13:49 crc kubenswrapper[4736]: I0316 15:13:49.902917 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:49Z is after 2026-02-23T05:33:13Z Mar 16 15:13:49 crc kubenswrapper[4736]: W0316 15:13:49.935230 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:49Z is after 2026-02-23T05:33:13Z Mar 16 15:13:49 crc kubenswrapper[4736]: E0316 15:13:49.935337 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:50 crc kubenswrapper[4736]: I0316 15:13:50.903159 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:50Z is after 2026-02-23T05:33:13Z Mar 16 15:13:51 crc kubenswrapper[4736]: W0316 15:13:51.408256 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:51Z is after 2026-02-23T05:33:13Z Mar 16 15:13:51 crc kubenswrapper[4736]: E0316 15:13:51.408334 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:51 crc kubenswrapper[4736]: I0316 15:13:51.900729 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:51Z is after 2026-02-23T05:33:13Z Mar 16 15:13:52 crc kubenswrapper[4736]: I0316 15:13:52.901433 4736 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:52Z is after 2026-02-23T05:33:13Z Mar 16 15:13:53 crc kubenswrapper[4736]: E0316 15:13:53.500493 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:53Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:13:53 crc kubenswrapper[4736]: E0316 15:13:53.901337 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:53Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.902561 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:53Z is after 2026-02-23T05:33:13Z Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.907787 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.909828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.909897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.909922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:53 crc kubenswrapper[4736]: I0316 15:13:53.909970 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:13:53 crc kubenswrapper[4736]: E0316 15:13:53.915418 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:53Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.496199 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.496380 
4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.498058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.498119 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.498134 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.901960 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:54Z is after 2026-02-23T05:33:13Z Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.977767 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.979274 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.979322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.979334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:54 crc kubenswrapper[4736]: I0316 15:13:54.979916 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.811182 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.811410 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.812874 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.812917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.812929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:55 crc kubenswrapper[4736]: I0316 15:13:55.900755 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:55Z is after 2026-02-23T05:33:13Z Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.196441 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.196752 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198071 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" exitCode=255 Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198121 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd"} Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198159 4736 scope.go:117] "RemoveContainer" containerID="f9ccc8ad25680483a46e629bd6155424be8becd439a1308c4ed3b22c5102c6db" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198326 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198972 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.198994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.199003 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.199424 4736 scope.go:117] "RemoveContainer" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" Mar 16 15:13:56 crc kubenswrapper[4736]: E0316 15:13:56.199569 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:13:56 crc kubenswrapper[4736]: I0316 15:13:56.902775 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:56Z is after 2026-02-23T05:33:13Z Mar 16 15:13:57 crc kubenswrapper[4736]: I0316 15:13:57.202809 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 16 15:13:57 crc kubenswrapper[4736]: I0316 15:13:57.496271 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:13:57 crc kubenswrapper[4736]: I0316 15:13:57.496365 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:13:57 crc kubenswrapper[4736]: I0316 15:13:57.722608 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:13:57 crc kubenswrapper[4736]: E0316 15:13:57.728742 4736 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:57 crc kubenswrapper[4736]: E0316 15:13:57.730037 4736 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 16 15:13:57 crc kubenswrapper[4736]: I0316 15:13:57.903271 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:57Z is after 2026-02-23T05:33:13Z Mar 16 15:13:58 crc kubenswrapper[4736]: W0316 15:13:58.012521 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:58Z is after 2026-02-23T05:33:13Z Mar 16 15:13:58 crc kubenswrapper[4736]: E0316 15:13:58.012604 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:58Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:13:58 crc kubenswrapper[4736]: I0316 15:13:58.902671 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:58Z is after 2026-02-23T05:33:13Z Mar 16 15:13:59 crc kubenswrapper[4736]: E0316 15:13:59.049970 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:13:59 crc kubenswrapper[4736]: I0316 15:13:59.900033 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:13:59Z is after 2026-02-23T05:33:13Z Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.902923 4736 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:00Z is after 2026-02-23T05:33:13Z Mar 16 15:14:00 crc kubenswrapper[4736]: E0316 15:14:00.908492 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:00Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.915668 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.917585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.917656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.917683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:00 crc kubenswrapper[4736]: I0316 15:14:00.917727 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:00 crc kubenswrapper[4736]: E0316 15:14:00.922658 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:00Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.866947 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.867179 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.868317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.868352 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.868365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.868924 4736 scope.go:117] "RemoveContainer" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" Mar 16 15:14:01 crc kubenswrapper[4736]: E0316 15:14:01.869178 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:01 crc kubenswrapper[4736]: I0316 15:14:01.899775 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:01Z is after 2026-02-23T05:33:13Z Mar 16 15:14:02 crc kubenswrapper[4736]: W0316 15:14:02.414454 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:02Z is after 2026-02-23T05:33:13Z Mar 16 15:14:02 crc kubenswrapper[4736]: E0316 15:14:02.414543 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 16 15:14:02 crc kubenswrapper[4736]: I0316 15:14:02.900366 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:02Z is after 2026-02-23T05:33:13Z Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.315606 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.315874 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.317612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.317650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.317658 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.318117 4736 scope.go:117] "RemoveContainer" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" Mar 16 15:14:03 crc kubenswrapper[4736]: E0316 15:14:03.318291 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:03 crc kubenswrapper[4736]: E0316 15:14:03.507694 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:03Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:03 crc kubenswrapper[4736]: I0316 15:14:03.901757 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:03Z is after 2026-02-23T05:33:13Z Mar 16 15:14:04 crc kubenswrapper[4736]: I0316 15:14:04.903667 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:04Z is after 2026-02-23T05:33:13Z Mar 16 15:14:05 crc kubenswrapper[4736]: I0316 15:14:05.903098 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:05Z is after 2026-02-23T05:33:13Z Mar 16 15:14:06 crc kubenswrapper[4736]: I0316 15:14:06.899886 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:06Z is after 2026-02-23T05:33:13Z Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.496611 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.496696 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.902560 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:07Z is after 2026-02-23T05:33:13Z Mar 16 15:14:07 crc kubenswrapper[4736]: E0316 15:14:07.911146 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:07Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.923680 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.925306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.925388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.925400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:07 crc kubenswrapper[4736]: I0316 15:14:07.925419 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:07 crc kubenswrapper[4736]: E0316 15:14:07.928465 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:07Z is after 2026-02-23T05:33:13Z" node="crc" Mar 16 15:14:08 crc kubenswrapper[4736]: I0316 15:14:08.903239 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:08Z is after 2026-02-23T05:33:13Z Mar 16 15:14:09 crc kubenswrapper[4736]: E0316 15:14:09.050118 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:14:09 crc kubenswrapper[4736]: I0316 15:14:09.900688 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:09Z is after 2026-02-23T05:33:13Z Mar 16 15:14:10 crc kubenswrapper[4736]: I0316 15:14:10.903097 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:10Z is after 2026-02-23T05:33:13Z Mar 16 15:14:11 crc kubenswrapper[4736]: I0316 15:14:11.906363 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:12 crc kubenswrapper[4736]: W0316 15:14:12.350583 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 16 15:14:12 crc kubenswrapper[4736]: E0316 15:14:12.351097 4736 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.901702 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.927001 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.928176 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.929907 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.929933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:12 crc kubenswrapper[4736]: I0316 15:14:12.929944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:13 crc kubenswrapper[4736]: W0316 15:14:13.115763 4736 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.115865 4736 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.514026 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acbab6efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,LastTimestamp:2026-03-16 15:13:18.898130682 +0000 UTC m=+0.625520969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.519056 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.524066 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.530427 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.534290 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1ad437a99d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.041538461 +0000 UTC m=+0.768928748,LastTimestamp:2026-03-16 15:13:19.041538461 +0000 UTC m=+0.768928748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.540639 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.079076748 +0000 UTC m=+0.806467035,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.546549 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.079096497 +0000 UTC m=+0.806486784,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.551396 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.079124016 +0000 UTC m=+0.806514303,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.556614 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.080245792 +0000 UTC m=+0.807636079,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.561970 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.080260162 +0000 UTC m=+0.807650449,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.567476 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.080269831 +0000 UTC m=+0.807660118,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.572571 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.081565271 +0000 UTC m=+0.808955548,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.577744 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.08158092 +0000 UTC m=+0.808971207,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.582837 4736 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.08159178 +0000 UTC m=+0.808982067,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.587086 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.081684117 +0000 UTC m=+0.809074404,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.593377 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.081699247 +0000 UTC m=+0.809089524,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.598205 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.081709406 +0000 UTC m=+0.809099693,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc 
kubenswrapper[4736]: E0316 15:14:13.603149 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.081808863 +0000 UTC m=+0.809199160,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.606598 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.081830632 +0000 UTC m=+0.809220929,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.611716 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.081882411 +0000 UTC m=+0.809272718,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.615812 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.083089874 +0000 UTC m=+0.810480191,Count:7,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.620329 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.083159072 +0000 UTC m=+0.810549399,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.624947 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f475d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f475d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961297245 +0000 UTC m=+0.688687532,LastTimestamp:2026-03-16 15:13:19.083174981 +0000 UTC m=+0.810565308,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.628587 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6e9517\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6e9517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961251607 +0000 UTC m=+0.688641894,LastTimestamp:2026-03-16 15:13:19.083740714 +0000 UTC m=+0.811131001,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.632263 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189d5b1acf6f1a9a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189d5b1acf6f1a9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:18.961285786 +0000 UTC 
m=+0.688676073,LastTimestamp:2026-03-16 15:13:19.083755623 +0000 UTC m=+0.811145910,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.637811 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1aef85ec9c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.499652252 +0000 UTC m=+1.227042539,LastTimestamp:2026-03-16 15:13:19.499652252 +0000 UTC m=+1.227042539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.642596 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1aef978515 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.500805397 +0000 UTC m=+1.228195694,LastTimestamp:2026-03-16 15:13:19.500805397 +0000 UTC m=+1.228195694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.648594 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1aeff0f9d2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.506667986 +0000 UTC 
m=+1.234058283,LastTimestamp:2026-03-16 15:13:19.506667986 +0000 UTC m=+1.234058283,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.653938 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1af03cb814 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.511631892 +0000 UTC m=+1.239022219,LastTimestamp:2026-03-16 15:13:19.511631892 +0000 UTC m=+1.239022219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.658228 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1af0b73c7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:19.519661182 +0000 UTC m=+1.247051489,LastTimestamp:2026-03-16 15:13:19.519661182 +0000 UTC m=+1.247051489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.662389 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b169c8480 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.155444352 +0000 UTC m=+1.882834639,LastTimestamp:2026-03-16 15:13:20.155444352 +0000 UTC m=+1.882834639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 
crc kubenswrapper[4736]: E0316 15:14:13.665788 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1b16c435ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.158045612 +0000 UTC m=+1.885435899,LastTimestamp:2026-03-16 15:13:20.158045612 +0000 UTC m=+1.885435899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.671571 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b16d13210 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.158896656 +0000 UTC m=+1.886286943,LastTimestamp:2026-03-16 15:13:20.158896656 +0000 UTC m=+1.886286943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.675812 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b16d30960 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.159017312 +0000 UTC m=+1.886407599,LastTimestamp:2026-03-16 15:13:20.159017312 +0000 UTC m=+1.886407599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.681803 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b177bc5ee openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.17007563 +0000 UTC m=+1.897465917,LastTimestamp:2026-03-16 15:13:20.17007563 +0000 UTC m=+1.897465917,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.685632 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b17a21926 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.172587302 +0000 UTC m=+1.899977579,LastTimestamp:2026-03-16 15:13:20.172587302 +0000 UTC m=+1.899977579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.689870 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b17eca3cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.17747246 +0000 UTC m=+1.904862757,LastTimestamp:2026-03-16 15:13:20.17747246 +0000 UTC m=+1.904862757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.693274 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1b17f03f21 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.177708833 +0000 UTC m=+1.905099130,LastTimestamp:2026-03-16 15:13:20.177708833 +0000 UTC m=+1.905099130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.697579 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b17f48640 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.177989184 +0000 UTC m=+1.905379471,LastTimestamp:2026-03-16 15:13:20.177989184 +0000 UTC m=+1.905379471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.702293 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b1861e7c4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.185157572 +0000 UTC m=+1.912547859,LastTimestamp:2026-03-16 15:13:20.185157572 +0000 UTC m=+1.912547859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.705603 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b190d3fc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.196386754 +0000 UTC m=+1.923777041,LastTimestamp:2026-03-16 15:13:20.196386754 +0000 UTC m=+1.923777041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.707609 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b28271c48 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.449739848 +0000 UTC m=+2.177130145,LastTimestamp:2026-03-16 15:13:20.449739848 +0000 UTC m=+2.177130145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.710514 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b28f276a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.463066785 +0000 UTC m=+2.190457112,LastTimestamp:2026-03-16 15:13:20.463066785 +0000 UTC m=+2.190457112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.713084 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b29151391 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.465335185 +0000 UTC m=+2.192725472,LastTimestamp:2026-03-16 15:13:20.465335185 +0000 UTC m=+2.192725472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.715767 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b33eed772 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.647378802 +0000 UTC m=+2.374769089,LastTimestamp:2026-03-16 15:13:20.647378802 +0000 UTC m=+2.374769089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.719665 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b346dde03 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.655703555 +0000 UTC m=+2.383093842,LastTimestamp:2026-03-16 15:13:20.655703555 +0000 UTC m=+2.383093842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.721736 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b34867b3e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.65731667 +0000 UTC m=+2.384706957,LastTimestamp:2026-03-16 15:13:20.65731667 +0000 UTC m=+2.384706957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.726733 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b3f690322 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.839934754 +0000 UTC m=+2.567325051,LastTimestamp:2026-03-16 15:13:20.839934754 +0000 UTC m=+2.567325051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.731675 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b4015ddec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.851262956 +0000 UTC m=+2.578653243,LastTimestamp:2026-03-16 15:13:20.851262956 +0000 UTC m=+2.578653243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.736874 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b4901bc9e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.000938654 +0000 UTC m=+2.728329001,LastTimestamp:2026-03-16 15:13:21.000938654 +0000 UTC m=+2.728329001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.742679 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b49a4e34e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.011630926 +0000 UTC m=+2.739021243,LastTimestamp:2026-03-16 15:13:21.011630926 +0000 UTC m=+2.739021243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.747037 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b49bfb279 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.013387897 +0000 UTC m=+2.740778214,LastTimestamp:2026-03-16 15:13:21.013387897 +0000 UTC m=+2.740778214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.752240 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1b4a2b2a53 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.020430931 +0000 UTC m=+2.747821268,LastTimestamp:2026-03-16 15:13:21.020430931 +0000 UTC m=+2.747821268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.756987 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b5a89ffce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.295081422 +0000 UTC m=+3.022471709,LastTimestamp:2026-03-16 15:13:21.295081422 +0000 UTC m=+3.022471709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.760863 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1b5aca27f5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.299286005 +0000 UTC m=+3.026676292,LastTimestamp:2026-03-16 15:13:21.299286005 +0000 UTC m=+3.026676292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.764707 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b5ad5716d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.300025709 +0000 UTC m=+3.027415996,LastTimestamp:2026-03-16 15:13:21.300025709 +0000 UTC m=+3.027415996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.769776 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b5ae82999 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.301252505 +0000 UTC m=+3.028642792,LastTimestamp:2026-03-16 15:13:21.301252505 +0000 UTC 
m=+3.028642792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.773981 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b5c083c96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.320131734 +0000 UTC m=+3.047522021,LastTimestamp:2026-03-16 15:13:21.320131734 +0000 UTC m=+3.047522021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.777634 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b5c253dc6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.322032582 +0000 UTC m=+3.049422869,LastTimestamp:2026-03-16 15:13:21.322032582 +0000 UTC m=+3.049422869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.782869 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b5c394884 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.323346052 +0000 UTC m=+3.050736339,LastTimestamp:2026-03-16 15:13:21.323346052 +0000 UTC m=+3.050736339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.787762 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b5c4724cf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.324254415 +0000 UTC m=+3.051644702,LastTimestamp:2026-03-16 15:13:21.324254415 +0000 UTC m=+3.051644702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.792799 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189d5b1b5c5cfe30 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.32568632 +0000 UTC m=+3.053076597,LastTimestamp:2026-03-16 15:13:21.32568632 +0000 UTC m=+3.053076597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.797464 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b5d902c42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.345817666 +0000 UTC m=+3.073207953,LastTimestamp:2026-03-16 15:13:21.345817666 +0000 UTC m=+3.073207953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.802594 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b68c6a2e2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.533936354 +0000 UTC m=+3.261326671,LastTimestamp:2026-03-16 15:13:21.533936354 +0000 UTC m=+3.261326671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.807720 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b68c7a664 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.534002788 +0000 UTC m=+3.261393075,LastTimestamp:2026-03-16 15:13:21.534002788 +0000 UTC m=+3.261393075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.810137 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b69a1dda0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.548303776 +0000 UTC m=+3.275694063,LastTimestamp:2026-03-16 15:13:21.548303776 +0000 UTC m=+3.275694063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.812187 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b69b7d7dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.549744092 +0000 UTC m=+3.277134379,LastTimestamp:2026-03-16 15:13:21.549744092 +0000 UTC m=+3.277134379,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.814942 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b69d1d1f9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.551446521 +0000 UTC m=+3.278836818,LastTimestamp:2026-03-16 15:13:21.551446521 +0000 UTC m=+3.278836818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.819069 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b69e0e248 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.552433736 +0000 UTC m=+3.279824043,LastTimestamp:2026-03-16 15:13:21.552433736 +0000 UTC m=+3.279824043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.823936 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b75d9e31f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.753301791 +0000 UTC m=+3.480692078,LastTimestamp:2026-03-16 15:13:21.753301791 +0000 UTC 
m=+3.480692078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.828843 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b76153cac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.75719134 +0000 UTC m=+3.484581627,LastTimestamp:2026-03-16 15:13:21.75719134 +0000 UTC m=+3.484581627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.833544 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b7699a559 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.765868889 +0000 UTC m=+3.493259176,LastTimestamp:2026-03-16 15:13:21.765868889 +0000 UTC m=+3.493259176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.837622 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b76ad17dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.767143388 +0000 UTC m=+3.494533675,LastTimestamp:2026-03-16 15:13:21.767143388 +0000 UTC m=+3.494533675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.841861 4736 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189d5b1b772bdbd3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.775451091 +0000 UTC m=+3.502841378,LastTimestamp:2026-03-16 15:13:21.775451091 +0000 UTC m=+3.502841378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.846467 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b8239f7ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.960925167 +0000 UTC m=+3.688315444,LastTimestamp:2026-03-16 15:13:21.960925167 +0000 UTC m=+3.688315444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.851292 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b82ed60e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.972682978 +0000 UTC m=+3.700073275,LastTimestamp:2026-03-16 15:13:21.972682978 +0000 UTC m=+3.700073275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.857008 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b82ff7787 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.973868423 +0000 UTC m=+3.701258710,LastTimestamp:2026-03-16 15:13:21.973868423 +0000 UTC m=+3.701258710,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.863250 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b86838a06 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.032855558 +0000 UTC m=+3.760245845,LastTimestamp:2026-03-16 15:13:22.032855558 +0000 UTC m=+3.760245845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.869514 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b8f7beaf9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.183351033 +0000 UTC m=+3.910741320,LastTimestamp:2026-03-16 15:13:22.183351033 +0000 UTC m=+3.910741320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.877853 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b907c64a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.200159397 +0000 UTC m=+3.927549684,LastTimestamp:2026-03-16 15:13:22.200159397 +0000 UTC m=+3.927549684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.885113 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b95544759 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.281416537 +0000 UTC m=+4.008806824,LastTimestamp:2026-03-16 15:13:22.281416537 +0000 UTC m=+4.008806824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.892052 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1b961ed3a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.294690728 +0000 UTC m=+4.022081015,LastTimestamp:2026-03-16 15:13:22.294690728 +0000 UTC m=+4.022081015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: I0316 15:14:13.898932 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.899417 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bc316fefb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.049152251 +0000 UTC m=+4.776542538,LastTimestamp:2026-03-16 15:13:23.049152251 +0000 UTC m=+4.776542538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.904330 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bcd3afbd5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.219282901 +0000 UTC m=+4.946673178,LastTimestamp:2026-03-16 15:13:23.219282901 +0000 UTC m=+4.946673178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.911427 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bcdd0aef7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.229093623 +0000 UTC m=+4.956483910,LastTimestamp:2026-03-16 15:13:23.229093623 +0000 UTC m=+4.956483910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.915846 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bcde4b31b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.230405403 +0000 UTC m=+4.957795690,LastTimestamp:2026-03-16 15:13:23.230405403 +0000 UTC m=+4.957795690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.921822 4736 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bdcdcc574 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.481544052 +0000 UTC m=+5.208934339,LastTimestamp:2026-03-16 15:13:23.481544052 +0000 UTC m=+5.208934339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.927533 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bdde2587f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.498686591 +0000 UTC m=+5.226076898,LastTimestamp:2026-03-16 15:13:23.498686591 +0000 UTC m=+5.226076898,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.932541 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bddf477ed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.499874285 +0000 UTC m=+5.227264582,LastTimestamp:2026-03-16 15:13:23.499874285 +0000 UTC m=+5.227264582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.937329 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1beae330b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.716845752 +0000 UTC m=+5.444236059,LastTimestamp:2026-03-16 15:13:23.716845752 +0000 UTC m=+5.444236059,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.941729 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1beb8c6610 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.727934992 +0000 UTC m=+5.455325289,LastTimestamp:2026-03-16 15:13:23.727934992 +0000 UTC m=+5.455325289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.947049 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1beba096bc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.729258172 +0000 UTC m=+5.456648479,LastTimestamp:2026-03-16 15:13:23.729258172 +0000 UTC m=+5.456648479,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.952902 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bf8cffc7f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.950468223 +0000 UTC m=+5.677858520,LastTimestamp:2026-03-16 15:13:23.950468223 +0000 UTC m=+5.677858520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.957750 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bf9a3b302 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.964343042 +0000 UTC m=+5.691733339,LastTimestamp:2026-03-16 15:13:23.964343042 +0000 UTC m=+5.691733339,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.962235 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1bf9b481dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:23.965444572 +0000 UTC m=+5.692834859,LastTimestamp:2026-03-16 15:13:23.965444572 +0000 UTC m=+5.692834859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.968447 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1c0496b93f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:24.148042047 +0000 UTC m=+5.875432374,LastTimestamp:2026-03-16 15:13:24.148042047 +0000 UTC m=+5.875432374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.972810 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189d5b1c05958d06 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:24.164742406 +0000 UTC 
m=+5.892132733,LastTimestamp:2026-03-16 15:13:24.164742406 +0000 UTC m=+5.892132733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.979132 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 16 15:14:13 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-controller-manager-crc.189d5b1ccc27867d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 16 15:14:13 crc kubenswrapper[4736]: body: Mar 16 15:14:13 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:27.496197757 +0000 UTC m=+9.223588054,LastTimestamp:2026-03-16 15:13:27.496197757 +0000 UTC m=+9.223588054,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:13 crc kubenswrapper[4736]: > Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.983412 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1ccc28b77f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:27.496275839 +0000 UTC m=+9.223666136,LastTimestamp:2026-03-16 15:13:27.496275839 +0000 UTC m=+9.223666136,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.991613 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189d5b1b82ff7787\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b82ff7787 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:21.973868423 +0000 UTC m=+3.701258710,LastTimestamp:2026-03-16 15:13:33.098323164 +0000 UTC m=+14.825713451,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:13 crc kubenswrapper[4736]: E0316 15:14:13.995461 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189d5b1b8f7beaf9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b8f7beaf9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.183351033 +0000 UTC m=+3.910741320,LastTimestamp:2026-03-16 15:13:33.363228175 +0000 UTC m=+15.090618462,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.000188 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189d5b1b907c64a5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1b907c64a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:22.200159397 +0000 UTC m=+3.927549684,LastTimestamp:2026-03-16 15:13:33.380879595 +0000 UTC m=+15.108269882,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.003741 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-apiserver-crc.189d5b1e30cec128 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 16 15:14:14 crc kubenswrapper[4736]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 16 15:14:14 crc kubenswrapper[4736]: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:33.479846184 +0000 UTC m=+15.207236511,LastTimestamp:2026-03-16 15:13:33.479846184 +0000 UTC m=+15.207236511,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.007140 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1e30cfa765 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:33.479905125 +0000 UTC m=+15.207295442,LastTimestamp:2026-03-16 15:13:33.479905125 +0000 UTC m=+15.207295442,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.011551 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189d5b1e30cec128\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-apiserver-crc.189d5b1e30cec128 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 16 15:14:14 crc kubenswrapper[4736]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 16 15:14:14 crc kubenswrapper[4736]: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:33.479846184 +0000 UTC m=+15.207236511,LastTimestamp:2026-03-16 15:13:33.487207879 +0000 UTC m=+15.214598196,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.016951 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189d5b1e30cfa765\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189d5b1e30cfa765 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:33.479905125 +0000 UTC m=+15.207295442,LastTimestamp:2026-03-16 15:13:33.487270911 +0000 UTC m=+15.214661228,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.022524 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f20384ae6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 16 15:14:14 crc kubenswrapper[4736]: body: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.49651735 +0000 UTC m=+19.223907677,LastTimestamp:2026-03-16 15:13:37.49651735 +0000 UTC m=+19.223907677,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.027175 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f203969be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.496590782 +0000 UTC m=+19.223981109,LastTimestamp:2026-03-16 15:13:37.496590782 +0000 UTC m=+19.223981109,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.034151 4736 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-controller-manager-crc.189d5b1f20384ae6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f20384ae6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 16 15:14:14 crc kubenswrapper[4736]: body: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.49651735 +0000 UTC m=+19.223907677,LastTimestamp:2026-03-16 15:13:47.496869873 +0000 UTC m=+29.224260200,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.039399 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1f203969be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f203969be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.496590782 +0000 UTC m=+19.223981109,LastTimestamp:2026-03-16 15:13:47.496962645 +0000 UTC m=+29.224352972,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.043867 4736 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b2174729e10 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:47.499560464 +0000 UTC m=+29.226950751,LastTimestamp:2026-03-16 15:13:47.499560464 +0000 UTC 
m=+29.226950751,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.048030 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1b17a21926\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b17a21926 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.172587302 +0000 UTC m=+1.899977579,LastTimestamp:2026-03-16 15:13:47.617668933 +0000 UTC m=+29.345059230,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.052723 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1b28271c48\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b28271c48 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.449739848 +0000 UTC m=+2.177130145,LastTimestamp:2026-03-16 15:13:47.854932557 +0000 UTC m=+29.582322844,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.057305 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1b28f276a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1b28f276a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:20.463066785 +0000 UTC m=+2.190457112,LastTimestamp:2026-03-16 15:13:47.863096235 +0000 UTC 
m=+29.590486522,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.065141 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1f20384ae6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f20384ae6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 16 15:14:14 crc kubenswrapper[4736]: body: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.49651735 +0000 UTC m=+19.223907677,LastTimestamp:2026-03-16 15:13:57.496349224 +0000 UTC m=+39.223739511,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.070136 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1f203969be\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f203969be openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.496590782 +0000 UTC m=+19.223981109,LastTimestamp:2026-03-16 15:13:57.496396035 +0000 UTC m=+39.223786322,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.076278 4736 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189d5b1f20384ae6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 16 15:14:14 crc kubenswrapper[4736]: &Event{ObjectMeta:{kube-controller-manager-crc.189d5b1f20384ae6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 16 15:14:14 crc kubenswrapper[4736]: body: Mar 16 15:14:14 crc kubenswrapper[4736]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:13:37.49651735 +0000 UTC m=+19.223907677,LastTimestamp:2026-03-16 15:14:07.496672248 +0000 UTC m=+49.224062575,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 16 15:14:14 crc kubenswrapper[4736]: > Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.902810 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.916001 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.929063 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.931785 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.931846 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.931861 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:14 crc kubenswrapper[4736]: I0316 15:14:14.931888 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:14 crc kubenswrapper[4736]: E0316 15:14:14.935796 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 16 15:14:15 crc kubenswrapper[4736]: I0316 15:14:15.904957 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.903851 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.977490 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.978637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.978664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.978674 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:16 crc kubenswrapper[4736]: I0316 15:14:16.979208 4736 scope.go:117] "RemoveContainer" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.260009 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.261252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab"} Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.261372 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.262061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.262081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.262090 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.497277 4736 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.497366 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.497430 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.497608 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.498894 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.498941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.498987 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.499608 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.499725 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8" gracePeriod=30 Mar 16 15:14:17 crc kubenswrapper[4736]: I0316 15:14:17.903701 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.266837 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.267457 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.269444 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" exitCode=255 Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.269525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab"} Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.269577 4736 scope.go:117] "RemoveContainer" containerID="c31c898ef049f044b8516d39eeb57426a89f76c8beeadc1400a04d846f7a92cd" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.269729 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.270503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.270535 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.270547 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.270997 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:18 crc kubenswrapper[4736]: E0316 15:14:18.271256 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.276006 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.278084 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.278525 4736 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8" exitCode=255 Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.278564 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8"} Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.278593 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"54a9e8325d39b9a28e7029851abaf7a833637c19cddd4bc3ed91b4b25d90ce2d"} Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.278682 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.279503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.279531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.279539 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.285391 4736 scope.go:117] "RemoveContainer" containerID="b1a48307cae76a325a04d20bcf35749389bf10b256f4f140d2abe13d76b4dffa" Mar 16 15:14:18 crc kubenswrapper[4736]: I0316 15:14:18.902277 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:19 crc kubenswrapper[4736]: E0316 15:14:19.050754 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:14:19 crc kubenswrapper[4736]: I0316 15:14:19.283163 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 16 15:14:19 crc kubenswrapper[4736]: I0316 15:14:19.285638 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 16 15:14:19 crc kubenswrapper[4736]: I0316 15:14:19.900836 4736 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:20 crc kubenswrapper[4736]: I0316 15:14:20.904322 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.867078 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.867324 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.869044 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.869096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.869122 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.869774 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:21 crc kubenswrapper[4736]: E0316 15:14:21.869953 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.902161 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:21 crc kubenswrapper[4736]: E0316 15:14:21.924820 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.936137 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.937306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.937335 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.937346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:21 crc kubenswrapper[4736]: I0316 15:14:21.937372 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:21 crc kubenswrapper[4736]: E0316 15:14:21.943457 4736 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 16 15:14:22 crc kubenswrapper[4736]: I0316 15:14:22.902134 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.315519 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.315688 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.316940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.316964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.316977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.317627 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:23 crc kubenswrapper[4736]: E0316 15:14:23.317790 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:23 crc kubenswrapper[4736]: I0316 15:14:23.901127 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.496118 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.496298 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.497780 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.497829 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.497841 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.502291 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:14:24 crc kubenswrapper[4736]: I0316 15:14:24.901045 4736 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.301657 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.301763 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.303555 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.303636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.303662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:25 crc kubenswrapper[4736]: I0316 15:14:25.902595 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:26 crc kubenswrapper[4736]: I0316 15:14:26.303945 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:26 crc kubenswrapper[4736]: I0316 15:14:26.305290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:26 crc kubenswrapper[4736]: I0316 15:14:26.305549 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:26 crc kubenswrapper[4736]: I0316 15:14:26.305625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:26 crc kubenswrapper[4736]: I0316 15:14:26.901624 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:27 crc kubenswrapper[4736]: I0316 15:14:27.901440 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:27 crc kubenswrapper[4736]: I0316 15:14:27.977547 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:27 crc kubenswrapper[4736]: I0316 15:14:27.979256 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:27 crc kubenswrapper[4736]: I0316 15:14:27.979400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:27 crc kubenswrapper[4736]: I0316 15:14:27.979495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.904429 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:28 crc kubenswrapper[4736]: E0316 15:14:28.932866 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.943910 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.946170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.946207 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.946218 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:28 crc kubenswrapper[4736]: I0316 15:14:28.946246 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:28 crc kubenswrapper[4736]: E0316 15:14:28.950023 4736 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 16 15:14:29 crc kubenswrapper[4736]: E0316 15:14:29.051366 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:14:29 crc kubenswrapper[4736]: I0316 15:14:29.731918 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 16 15:14:29 crc kubenswrapper[4736]: I0316 15:14:29.751641 4736 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 16 15:14:29 crc kubenswrapper[4736]: I0316 15:14:29.905856 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:30 crc kubenswrapper[4736]: I0316 15:14:30.902426 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:31 crc kubenswrapper[4736]: I0316 15:14:31.905468 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:32 crc kubenswrapper[4736]: I0316 15:14:32.905043 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.902612 4736 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.977663 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.979557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.979628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.979643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:33 crc kubenswrapper[4736]: I0316 15:14:33.980507 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:33 crc kubenswrapper[4736]: E0316 15:14:33.980778 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:34 crc kubenswrapper[4736]: I0316 15:14:34.104083 4736 csr.go:261] certificate signing request csr-bqmm5 is approved, waiting to be issued Mar 16 15:14:34 crc kubenswrapper[4736]: I0316 15:14:34.112093 4736 csr.go:257] certificate signing request csr-bqmm5 is issued Mar 16 15:14:34 crc kubenswrapper[4736]: I0316 15:14:34.215938 4736 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 16 15:14:34 crc kubenswrapper[4736]: I0316 15:14:34.722613 4736 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.114008 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-06 17:55:51.900175396 +0000 UTC Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.114080 4736 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7106h41m16.786100617s for next certificate rotation Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.817252 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.817514 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.819531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.819592 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.819641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.950616 4736 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.952579 4736 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.952890 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.953183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.953876 4736 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.967349 4736 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.968010 4736 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 16 15:14:35 crc kubenswrapper[4736]: E0316 15:14:35.968245 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.973314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.973355 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.973372 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.973399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:35 crc kubenswrapper[4736]: I0316 15:14:35.973419 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:35Z","lastTransitionTime":"2026-03-16T15:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:35 crc kubenswrapper[4736]: E0316 15:14:35.996182 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.008370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.008465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.008490 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.008532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.008561 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:36Z","lastTransitionTime":"2026-03-16T15:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.027571 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.039460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.039509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.039527 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.039557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.039579 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:36Z","lastTransitionTime":"2026-03-16T15:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.061168 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.073886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.074284 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.074530 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.074749 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:36 crc kubenswrapper[4736]: I0316 15:14:36.074966 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:36Z","lastTransitionTime":"2026-03-16T15:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.096527 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.096891 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.096946 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.197410 4736 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.298389 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.398579 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.499301 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.600229 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.701307 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.802249 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:36 crc kubenswrapper[4736]: E0316 15:14:36.903258 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.003449 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.103645 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.204519 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.305807 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.406270 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.506743 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.607799 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.708948 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.809539 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:37 crc kubenswrapper[4736]: E0316 15:14:37.910004 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.011136 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.111817 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.212405 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 
15:14:38.313409 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.413620 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.513764 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.614863 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.715847 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.816609 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:38 crc kubenswrapper[4736]: E0316 15:14:38.916930 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.017989 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.052500 4736 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.118531 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.219204 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.319744 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.420078 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.520901 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.621177 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.722146 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.822813 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:39 crc kubenswrapper[4736]: E0316 15:14:39.923409 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.023583 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.124022 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.224765 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 
15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.324927 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.425530 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.526218 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.627194 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.728075 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.829179 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:40 crc kubenswrapper[4736]: E0316 15:14:40.929400 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.029919 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.130487 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.231365 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.331547 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.432026 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.533223 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.633980 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.734870 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.835269 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:41 crc kubenswrapper[4736]: E0316 15:14:41.936059 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.036895 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.137692 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.238090 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.338659 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.439145 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.539847 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.641037 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.741368 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.842591 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:42 crc kubenswrapper[4736]: E0316 15:14:42.942852 4736 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.002746 4736 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.046087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.046190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.046209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.046242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.046261 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.150246 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.150643 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.150779 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.151048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.151273 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.254566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.255196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.255273 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.255341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.255403 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.358229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.358309 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.358332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.358366 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.358386 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.439699 4736 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.462297 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.462780 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.462938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.463180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.463348 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.567615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.568154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.568329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.568469 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.568603 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.672805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.673395 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.673607 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.673839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.674052 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.777271 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.777360 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.777728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.777805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.777925 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.882298 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.882354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.882373 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.882402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.882425 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.924008 4736 apiserver.go:52] "Watching apiserver" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.933581 4736 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.934290 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.935277 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.935377 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.935470 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.936081 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:43 crc kubenswrapper[4736]: E0316 15:14:43.936233 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:43 crc kubenswrapper[4736]: E0316 15:14:43.936358 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.936403 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.937047 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:43 crc kubenswrapper[4736]: E0316 15:14:43.937166 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.941545 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.941656 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.941834 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.941918 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.942061 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.942378 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.942730 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.942972 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.944158 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.986955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.987015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.987035 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.987061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.987080 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:43Z","lastTransitionTime":"2026-03-16T15:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:43 crc kubenswrapper[4736]: I0316 15:14:43.987740 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.009805 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.016663 4736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.031244 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.049381 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.065856 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.081821 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.081900 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.081952 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.081997 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082034 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082070 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.082173 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082207 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082250 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082282 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082320 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082352 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082388 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082422 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082456 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082487 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082527 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082561 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082601 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082632 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082666 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082701 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082734 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082768 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082850 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082884 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082886 4736 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.082919 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.083010 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:14:44.582945096 +0000 UTC m=+86.310335423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083091 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083195 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083240 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083279 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083320 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083354 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083395 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083431 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083473 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083515 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083550 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083585 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083621 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083655 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083689 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083727 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083764 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083786 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083798 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083892 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.084274 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.084487 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.084535 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.084841 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085074 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085432 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.083897 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085747 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085807 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085857 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085914 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.085957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086284 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086339 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086459 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086513 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086564 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086618 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086669 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086719 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086772 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086823 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086868 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086916 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086965 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087009 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087178 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087286 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087336 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087388 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087461 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087513 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087563 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087616 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087663 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087710 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087759 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087820 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087870 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087980 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088081 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088159 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088212 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088258 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088347 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088395 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088450 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089170 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089240 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089292 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089385 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 16 15:14:44 crc 
kubenswrapper[4736]: I0316 15:14:44.089449 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086279 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086659 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.086675 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087645 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087786 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090003 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087846 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090050 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.087927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088483 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088709 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088717 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.088994 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089289 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089424 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089709 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.089889 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090700 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090770 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.090820 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091079 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091165 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091201 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091236 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091273 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091307 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091341 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091375 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091453 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091505 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091558 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091600 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091637 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091670 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091704 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091736 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091769 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091803 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091836 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093564 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093610 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093644 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093679 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093712 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093745 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093777 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093810 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093843 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093910 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093949 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.093986 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094052 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094088 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094151 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094188 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094221 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094253 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094287 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094370 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094447 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094483 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096534 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096680 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094523 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097041 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097138 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097196 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097249 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097305 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097361 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097410 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097483 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097537 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") 
pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097591 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097645 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097700 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097752 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097802 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100097 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100183 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100223 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100263 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100305 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102203 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102298 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102358 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102410 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102465 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102568 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102625 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102691 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102743 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102799 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102849 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102904 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103010 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103145 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103204 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103261 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103320 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103377 4736 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103419 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103463 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103546 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103620 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103655 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103707 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103757 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.103800 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103838 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103887 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103933 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104019 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104065 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104141 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104213 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104277 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 
crc kubenswrapper[4736]: I0316 15:14:44.104332 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.108591 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.108639 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.108681 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.110454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.110629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.110676 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") 
pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114390 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114433 4736 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114457 4736 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114478 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114501 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114522 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114546 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114564 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091231 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091341 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091359 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.091907 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.092009 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.092213 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.092288 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.092451 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.092990 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094539 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.094801 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095162 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095302 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095628 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095838 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095849 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.095910 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096204 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096186 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096400 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096421 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.096871 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097057 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097160 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097224 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097642 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097669 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.097744 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.098164 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.098176 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.098486 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.098784 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.098993 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.099374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.099689 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.099805 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100547 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.100576 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.101298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.101426 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102245 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102566 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.102823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103322 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103375 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.103830 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104438 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104800 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.104968 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.105129 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.115466 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.105434 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.105758 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.106244 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.106845 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.106889 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.115600 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.115610 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107534 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107550 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.105633 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107390 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107226 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107581 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.107605 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109168 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109175 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109407 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109472 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109497 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109651 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.109972 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.110280 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.110319 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.111384 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.111730 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.111837 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.112560 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.112825 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.113134 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.113162 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.113327 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.115769 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.116006 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.116048 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.116965 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.116990 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.114581 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.117304 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.117575 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.118066 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.118097 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.118672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.118685 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119008 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119034 4736 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119048 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119064 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119365 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119387 4736 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119404 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119417 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119431 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119445 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119458 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119472 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119486 4736 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc 
kubenswrapper[4736]: I0316 15:14:44.119503 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119520 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119589 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119611 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119629 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119647 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119671 4736 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119685 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119701 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119720 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.119738 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.119799 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.119945 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-16 15:14:44.619903519 +0000 UTC m=+86.347293846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.120760 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.120866 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:44.620839824 +0000 UTC m=+86.348230331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.121681 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.122513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.123395 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.123748 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.123798 4736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.124477 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.125503 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.125635 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.125835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.128647 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.135963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.136021 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.136028 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.136720 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.137297 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.139392 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.141904 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.142072 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.142256 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.142301 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.142324 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.142428 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:44.642389587 +0000 UTC m=+86.369780094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.143099 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.143690 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.144590 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.145712 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.147661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.148826 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.149444 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.150767 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.151553 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.151626 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.151634 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.152856 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.153410 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.154516 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.155398 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.155674 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.155699 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.155715 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.155801 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:44.655769331 +0000 UTC m=+86.383159838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.155798 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.155932 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.155973 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.156129 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.156210 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.156383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.157820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.157871 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.158354 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.158533 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.162747 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.162947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.163598 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.164443 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.164936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.165350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.166852 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.166876 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.166955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.167494 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.167616 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.168004 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.168094 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.168843 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.170536 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.170699 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.170827 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.170935 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.171466 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.171498 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.171545 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.171761 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.177009 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.177524 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.179282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.179458 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.179582 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.180142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.180752 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.181274 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.181399 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.194076 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.197683 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.199762 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.199792 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.199801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.199815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.199825 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.205985 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220446 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220595 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220618 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220638 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220658 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" 
DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220677 4736 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220696 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220718 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220738 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220758 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220776 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220795 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220813 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220832 4736 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220850 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220898 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220916 4736 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220936 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" 
DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220955 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220974 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.220991 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221009 4736 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221030 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221048 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221066 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221084 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221141 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221160 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221179 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221201 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221220 4736 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" 
DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221238 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221256 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221275 4736 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221296 4736 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221314 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221333 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221352 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221346 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221371 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221474 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221501 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221518 4736 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221538 4736 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221540 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221553 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221568 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221583 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221599 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221617 4736 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221633 4736 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221648 4736 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221686 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221702 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221722 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221737 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc 
kubenswrapper[4736]: I0316 15:14:44.221751 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221765 4736 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221781 4736 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221795 4736 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221808 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221822 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221835 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221849 4736 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221863 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221876 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221891 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221904 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221918 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.221932 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221946 4736 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221960 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221974 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.221988 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222003 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222019 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222034 4736 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222048 4736 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222063 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222077 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222091 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222131 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.222146 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222162 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222176 4736 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222189 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222203 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222216 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222230 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222244 4736 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222259 4736 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222271 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222284 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222296 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222311 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc 
kubenswrapper[4736]: I0316 15:14:44.222327 4736 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222339 4736 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222355 4736 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222370 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222385 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222402 4736 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222415 4736 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222429 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222442 4736 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222457 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222472 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222487 4736 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222501 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222515 4736 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222529 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222542 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222560 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222578 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222596 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222613 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222632 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222647 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222663 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222678 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222696 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222714 4736 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.222734 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222748 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222761 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222799 4736 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222817 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222837 4736 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222855 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222884 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222904 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222925 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222944 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222963 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.222982 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") 
on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223000 4736 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223018 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223036 4736 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223053 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223083 4736 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223100 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223142 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223160 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223178 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223195 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223212 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223230 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223249 4736 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 
15:14:44.223266 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223285 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223303 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223322 4736 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223341 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223374 4736 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223393 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223411 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223430 4736 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223448 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223467 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223487 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223507 4736 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") 
on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223529 4736 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223547 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223567 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.223587 4736 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.271976 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.294310 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.300020 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f "/env/_master" ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: source "/env/_master" Mar 16 15:14:44 crc kubenswrapper[4736]: set +o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc kubenswrapper[4736]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 16 15:14:44 crc kubenswrapper[4736]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 16 15:14:44 crc kubenswrapper[4736]: ho_enable="--enable-hybrid-overlay" Mar 16 15:14:44 crc kubenswrapper[4736]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 16 15:14:44 crc kubenswrapper[4736]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 16 15:14:44 crc kubenswrapper[4736]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 16 15:14:44 crc kubenswrapper[4736]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-host=127.0.0.1 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-port=9743 \ Mar 16 15:14:44 crc kubenswrapper[4736]: ${ho_enable} \ Mar 16 15:14:44 crc kubenswrapper[4736]: --enable-interconnect \ Mar 16 15:14:44 crc kubenswrapper[4736]: --disable-approver \ Mar 16 15:14:44 crc kubenswrapper[4736]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --wait-for-kubernetes-api=200s \ Mar 16 15:14:44 crc kubenswrapper[4736]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --loglevel="${LOGLEVEL}" Mar 16 15:14:44 crc kubenswrapper[4736]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.302337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.302383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.302400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.302425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.302442 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.304045 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f "/env/_master" ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: source "/env/_master" Mar 16 15:14:44 crc kubenswrapper[4736]: set +o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc kubenswrapper[4736]: Mar 16 15:14:44 crc kubenswrapper[4736]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 16 15:14:44 crc kubenswrapper[4736]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --disable-webhook \ Mar 16 15:14:44 crc kubenswrapper[4736]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --loglevel="${LOGLEVEL}" Mar 16 15:14:44 crc kubenswrapper[4736]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.305282 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.306458 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.315369 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.317416 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.327588 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: source /etc/kubernetes/apiserver-url.env Mar 16 15:14:44 crc kubenswrapper[4736]: else Mar 16 15:14:44 crc kubenswrapper[4736]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 16 15:14:44 crc kubenswrapper[4736]: exit 1 Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc 
kubenswrapper[4736]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 16 15:14:44 crc kubenswrapper[4736]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.328842 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.361080 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"000c48cbb0564bfad24e7f6b259dc17857f230ec6a40ba1278fdb212d7c12188"} Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.364038 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: source /etc/kubernetes/apiserver-url.env Mar 16 15:14:44 crc kubenswrapper[4736]: else Mar 16 15:14:44 crc kubenswrapper[4736]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 16 15:14:44 crc kubenswrapper[4736]: exit 1 Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc 
kubenswrapper[4736]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 16 15:14:44 crc kubenswrapper[4736]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.365082 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e580b7102926a895a429c4cfa1288eb71e0688fb5abd8d6514bcabfff52512f7"} Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.365252 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.374855 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.376584 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.378545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"528cc3797159fca23ea142ea240abd3907cb357b43ba4ef0c1d58929d79f40a7"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.380868 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.381702 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f "/env/_master" ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: source "/env/_master" Mar 16 15:14:44 crc kubenswrapper[4736]: set +o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc kubenswrapper[4736]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 16 15:14:44 crc kubenswrapper[4736]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 16 15:14:44 crc kubenswrapper[4736]: ho_enable="--enable-hybrid-overlay" Mar 16 15:14:44 crc kubenswrapper[4736]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 16 15:14:44 crc kubenswrapper[4736]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 16 15:14:44 crc kubenswrapper[4736]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 16 15:14:44 crc kubenswrapper[4736]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-host=127.0.0.1 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --webhook-port=9743 \ Mar 16 15:14:44 crc kubenswrapper[4736]: ${ho_enable} \ Mar 16 15:14:44 crc kubenswrapper[4736]: --enable-interconnect \ Mar 16 15:14:44 crc kubenswrapper[4736]: --disable-approver \ Mar 16 15:14:44 crc kubenswrapper[4736]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --wait-for-kubernetes-api=200s \ Mar 16 15:14:44 crc kubenswrapper[4736]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --loglevel="${LOGLEVEL}" Mar 16 15:14:44 crc kubenswrapper[4736]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.388495 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:14:44 crc kubenswrapper[4736]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 16 15:14:44 crc kubenswrapper[4736]: if [[ -f "/env/_master" ]]; then Mar 16 15:14:44 crc kubenswrapper[4736]: set -o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: source "/env/_master" Mar 16 15:14:44 crc kubenswrapper[4736]: set +o allexport Mar 16 15:14:44 crc kubenswrapper[4736]: fi Mar 16 15:14:44 crc kubenswrapper[4736]: Mar 16 15:14:44 crc kubenswrapper[4736]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 16 15:14:44 crc kubenswrapper[4736]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 16 15:14:44 crc kubenswrapper[4736]: --disable-webhook \ Mar 16 15:14:44 crc kubenswrapper[4736]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 16 15:14:44 crc kubenswrapper[4736]: --loglevel="${LOGLEVEL}" Mar 16 15:14:44 crc kubenswrapper[4736]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 16 15:14:44 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.389672 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.401833 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.406374 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.406436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.406456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.406486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.406504 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.419725 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.432995 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.447065 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.459929 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.473491 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.486235 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.496520 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.507873 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.510411 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.510464 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.510484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.510515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.510535 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.526903 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.541784 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.569851 4736 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.612921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.612975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.612992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.613016 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.613034 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.633971 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.634141 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:14:45.634097055 +0000 UTC m=+87.361487352 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.634212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.634274 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.634355 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.634400 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:45.634392394 +0000 UTC m=+87.361782681 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.634406 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.634500 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:45.634481397 +0000 UTC m=+87.361871684 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.715519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.715565 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.715575 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.715593 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.715602 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.734657 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.734698 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734836 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734854 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734866 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734910 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-03-16 15:14:45.734895531 +0000 UTC m=+87.462285818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734836 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734945 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734958 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: E0316 15:14:44.734990 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:45.734977804 +0000 UTC m=+87.462368091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.818032 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.818073 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.818086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.818114 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.818125 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.921603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.921676 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.921695 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.921730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.921751 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:44Z","lastTransitionTime":"2026-03-16T15:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.985191 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.986366 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.989317 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.990970 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.993200 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.994672 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 16 15:14:44 crc kubenswrapper[4736]: I0316 15:14:44.997847 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.000072 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.000270 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.000967 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.001453 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.003624 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.004902 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.007447 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.008531 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.009624 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.011730 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.012833 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.014819 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.015646 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.016821 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.019877 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.021570 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.024749 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.025270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.025301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.025312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.025329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.025342 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.026125 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.027490 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.029523 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.031077 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.033096 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.034015 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.035446 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.036150 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 
15:14:45.036844 4736 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.037610 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.040008 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.041033 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.042665 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.044846 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.045595 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.046605 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.047284 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.048512 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.049125 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.050405 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.051016 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.051985 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 
15:14:45.052498 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.053735 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.054481 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.055761 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.056361 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.057335 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.057847 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.058442 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.059538 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.060032 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.060978 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.129155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.129206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.129216 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.129234 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.129249 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.233004 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.233074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.233092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.233164 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.233192 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.336892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.336977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.337002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.337037 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.337062 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.383377 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.383738 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.440585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.440655 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.440666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.440689 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.440706 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.543812 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.543869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.543887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.543923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.543942 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.643642 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.643880 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:14:47.643838329 +0000 UTC m=+89.371228656 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.644634 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.644695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.644819 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.644882 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:47.644864327 +0000 UTC m=+89.372254614 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.644921 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.645006 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:47.64498722 +0000 UTC m=+89.372377537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.648036 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.648099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.648161 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.648225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.648255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.745467 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.745549 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745711 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745733 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745749 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745821 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:47.745798357 +0000 UTC m=+89.473188654 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745871 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745949 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.745974 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.746133 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:47.746061454 +0000 UTC m=+89.473451781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.752622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.752781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.752817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.752851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.752891 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.856480 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.856536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.856556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.856584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.856609 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.958868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.958904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.958911 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.958928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.958938 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:45Z","lastTransitionTime":"2026-03-16T15:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.977740 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.977776 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:45 crc kubenswrapper[4736]: I0316 15:14:45.977741 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.977922 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.978043 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:45 crc kubenswrapper[4736]: E0316 15:14:45.978175 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.061988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.062028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.062037 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.062053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.062064 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.165639 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.165690 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.165703 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.165722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.165734 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.173498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.173581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.173600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.173628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.173651 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.185819 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.191308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.191364 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.191382 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.191404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.191420 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.209317 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.214766 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.214884 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.214908 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.214939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.214959 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.232497 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.237504 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.237552 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.237568 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.237591 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.237607 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.254188 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.259596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.259647 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.259666 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.259691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.259709 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.276214 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:46 crc kubenswrapper[4736]: E0316 15:14:46.276495 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.280041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.280144 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.280181 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.280217 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.280237 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.383692 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.383731 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.383741 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.383757 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.383773 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.488034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.488509 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.488636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.488794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.488945 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.599618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.599653 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.599666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.599929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.599947 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.703851 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.703925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.703949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.703983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.704002 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.807815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.808175 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.808346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.808482 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.808599 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.911904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.912267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.912429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.912542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:46 crc kubenswrapper[4736]: I0316 15:14:46.912635 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:46Z","lastTransitionTime":"2026-03-16T15:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.015843 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.015898 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.015910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.015959 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.015976 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.119448 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.119560 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.119589 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.119626 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.119653 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.222375 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.222720 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.222853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.222981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.223095 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.325575 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.325921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.326034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.326222 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.326393 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.428689 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.429451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.429556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.429654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.429731 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.533662 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.534179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.534475 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.534773 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.534904 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.638809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.638883 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.638910 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.638949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.638974 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.665688 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.665810 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.665851 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.666134 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.666487 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.666517 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:51.666189665 +0000 UTC m=+93.393579992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.666573 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:14:51.666534314 +0000 UTC m=+93.393924641 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.666597 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:51.666585836 +0000 UTC m=+93.393976153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.742864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.742922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.742939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.742965 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.742982 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.767248 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.767327 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.767599 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.767638 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.767664 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.767755 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:51.767719321 +0000 UTC m=+93.495109658 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.768423 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.768470 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.768484 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.768572 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:51.768540084 +0000 UTC m=+93.495930371 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.845857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.845931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.845952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.845980 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.845998 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.950258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.950338 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.950359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.950391 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.950414 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:47Z","lastTransitionTime":"2026-03-16T15:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.977394 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.977464 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:47 crc kubenswrapper[4736]: I0316 15:14:47.977410 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.977627 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.977753 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:47 crc kubenswrapper[4736]: E0316 15:14:47.977846 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.053306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.053815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.053834 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.053871 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.053900 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.156939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.156996 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.157008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.157031 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.157044 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.260062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.260132 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.260145 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.260168 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.260182 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.362959 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.363018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.363034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.363061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.363079 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.466209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.466267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.466285 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.466312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.466331 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.569571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.569654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.569673 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.569704 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.569727 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.672412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.672489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.672510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.672544 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.672565 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.775292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.775334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.775343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.775405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.775422 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.872490 4736 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.878697 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.878742 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.878758 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.878785 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.878803 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.982184 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.982220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.982229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.982242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.982273 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:48Z","lastTransitionTime":"2026-03-16T15:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.992866 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:48 crc kubenswrapper[4736]: I0316 15:14:48.998734 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.005938 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.016560 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.026458 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.034513 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.045290 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.062037 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.084124 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.084182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.084196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.084219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.084232 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.186533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.186574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.186584 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.186600 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.186614 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.289374 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.289449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.289467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.289495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.289518 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.392853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.392908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.392923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.392942 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.392955 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.495923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.495982 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.495994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.496015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.496028 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.598517 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.598574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.598586 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.598604 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.598614 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.701127 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.701171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.701183 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.701198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.701209 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.803982 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.804028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.804046 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.804065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.804077 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.906605 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.906675 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.906692 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.906722 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.906742 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:49Z","lastTransitionTime":"2026-03-16T15:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.977386 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.977386 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:49 crc kubenswrapper[4736]: I0316 15:14:49.977570 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:49 crc kubenswrapper[4736]: E0316 15:14:49.977676 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:49 crc kubenswrapper[4736]: E0316 15:14:49.977778 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:49 crc kubenswrapper[4736]: E0316 15:14:49.977938 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.009898 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.009973 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.009992 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.010019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.010036 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.114241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.114288 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.114299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.114317 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.114328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.217821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.217886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.217904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.218012 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.218058 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.322125 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.322180 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.322196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.322216 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.322229 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.424332 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.424404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.424428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.424463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.424492 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.528929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.529002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.529028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.529061 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.529084 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.632461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.632519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.632533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.632557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.632573 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.735752 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.736382 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.736596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.736808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.737005 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.840876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.840929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.840943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.840963 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.840974 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.944172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.944497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.944582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.944732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:50 crc kubenswrapper[4736]: I0316 15:14:50.944813 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:50Z","lastTransitionTime":"2026-03-16T15:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.047757 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.047826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.047847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.047877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.047898 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.151007 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.151086 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.151134 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.151167 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.151193 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.254995 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.255064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.255089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.255156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.255175 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.358193 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.358246 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.358283 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.358310 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.358326 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.460774 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.460863 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.460881 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.460920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.460938 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.563860 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.563909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.563918 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.563936 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.563954 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.667166 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.667213 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.667226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.667244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.667256 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.706025 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.706205 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.706257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.706372 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.706411 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.706468 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:14:59.706408421 +0000 UTC m=+101.433798738 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.706525 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:59.706510244 +0000 UTC m=+101.433900571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.706557 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:59.706544805 +0000 UTC m=+101.433935132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.770173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.770258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.770272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.770322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.770341 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.807327 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.807380 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807525 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807545 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807557 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807617 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:59.807601859 +0000 UTC m=+101.534992136 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807644 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807676 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807694 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.807778 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:14:59.807748933 +0000 UTC m=+101.535139420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.874597 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.874655 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.874672 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.874693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.874703 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.977460 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.977528 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.977596 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.977528 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.977743 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:51 crc kubenswrapper[4736]: E0316 15:14:51.977819 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.978550 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.978641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.978665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.978737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:51 crc kubenswrapper[4736]: I0316 15:14:51.978760 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:51Z","lastTransitionTime":"2026-03-16T15:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.082768 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.082818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.082828 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.082845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.082856 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.186392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.186454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.186472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.186503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.186526 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.289416 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.289466 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.289479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.289499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.289514 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.392684 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.392720 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.392730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.392746 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.392764 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.495598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.495657 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.495681 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.495711 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.495729 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.599754 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.599873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.599927 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.599961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.599981 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.702645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.702707 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.702725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.702754 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.702773 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.805961 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.806022 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.806040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.806067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.806092 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.908978 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.909020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.909029 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.909046 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:52 crc kubenswrapper[4736]: I0316 15:14:52.909055 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:52Z","lastTransitionTime":"2026-03-16T15:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.011763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.011856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.011874 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.011931 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.011951 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.114719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.114767 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.114782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.114810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.114827 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.217479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.217547 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.217566 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.217596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.217622 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.321044 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.321166 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.321191 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.321223 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.321245 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.423813 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.423867 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.423886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.424053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.424077 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.526376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.526476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.526490 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.526510 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.526523 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.628551 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.628583 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.628594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.628610 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.628622 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.730985 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.731022 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.731034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.731055 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.731125 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.833577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.833649 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.833669 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.833696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.833713 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.936738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.936777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.936786 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.936801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.936811 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:53Z","lastTransitionTime":"2026-03-16T15:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.977575 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:53 crc kubenswrapper[4736]: E0316 15:14:53.977736 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.977614 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:53 crc kubenswrapper[4736]: I0316 15:14:53.977591 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:53 crc kubenswrapper[4736]: E0316 15:14:53.977816 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:53 crc kubenswrapper[4736]: E0316 15:14:53.978155 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.039078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.039140 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.039151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.039169 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.039179 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.142775 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.142838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.142850 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.142869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.142886 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.246736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.246821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.246840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.246869 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.246888 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.350456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.350511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.350520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.350537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.350548 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.453324 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.453364 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.453377 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.453399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.453417 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.556096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.556300 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.556323 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.556353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.556374 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.659967 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.660039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.660058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.660084 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.660278 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.762817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.762882 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.762900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.762925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.762945 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.868572 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.868676 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.868699 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.868736 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.868764 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.972672 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.972738 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.972758 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.972787 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:54 crc kubenswrapper[4736]: I0316 15:14:54.972808 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:54Z","lastTransitionTime":"2026-03-16T15:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.081209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.081287 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.081307 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.081337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.081363 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.184371 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.184603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.184728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.184839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.184932 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.286844 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.287137 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.287210 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.287277 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.287332 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.389887 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.389920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.389934 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.389949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.389958 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.417794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.417826 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.437354 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.449856 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.472938 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43
f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.488345 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.492455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.492507 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.492540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.492573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.492593 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.505795 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.521950 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.539604 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.553150 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.595827 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.596150 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.596230 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.596293 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.596363 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.699257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.699313 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.699327 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.699346 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.699363 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.802006 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.802407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.802499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.802573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.802641 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.905084 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.905163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.905172 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.905185 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.905193 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:55Z","lastTransitionTime":"2026-03-16T15:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.977270 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.977307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:55 crc kubenswrapper[4736]: I0316 15:14:55.977292 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:55 crc kubenswrapper[4736]: E0316 15:14:55.977459 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:55 crc kubenswrapper[4736]: E0316 15:14:55.977581 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:55 crc kubenswrapper[4736]: E0316 15:14:55.977894 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.007503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.007802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.007901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.007977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.008065 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.110096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.110145 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.110154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.110170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.110178 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.212431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.212455 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.212463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.212476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.212484 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.318844 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.318912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.319065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.319173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.319204 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.386839 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.386877 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.386889 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.386906 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.386918 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.401946 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.410735 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.411052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.411403 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.411520 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.411628 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.433142 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.437116 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.437257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.437321 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.437397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.437475 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.451005 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.454868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.454912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.454923 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.454947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.454960 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.467955 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.471574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.471627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.471666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.471687 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.471704 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.483858 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:56 crc kubenswrapper[4736]: E0316 15:14:56.484035 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.485975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.486039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.486055 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.486080 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.486095 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.590539 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.590582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.590590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.590637 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.590652 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.693388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.693436 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.693449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.693466 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.693479 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.797361 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.798452 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.798641 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.798865 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.799161 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.902494 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.903020 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.903301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.903493 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:56 crc kubenswrapper[4736]: I0316 15:14:56.903677 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:56Z","lastTransitionTime":"2026-03-16T15:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.006312 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.006367 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.006379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.006402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.006419 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.109788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.109847 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.109920 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.109951 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.109976 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.213737 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.213782 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.213791 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.213812 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.213824 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.317093 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.317215 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.317241 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.317319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.317381 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.420423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.420457 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.420465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.420479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.420488 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.523209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.523258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.523272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.523290 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.523303 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.626079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.626147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.626159 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.626176 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.626189 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.729553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.729618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.729635 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.729663 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.729676 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.832858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.832895 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.832903 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.832917 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.832927 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.935853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.935892 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.935901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.935916 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.935924 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:57Z","lastTransitionTime":"2026-03-16T15:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.977834 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.977901 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:57 crc kubenswrapper[4736]: I0316 15:14:57.977834 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:57 crc kubenswrapper[4736]: E0316 15:14:57.978007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:14:57 crc kubenswrapper[4736]: E0316 15:14:57.978234 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:57 crc kubenswrapper[4736]: E0316 15:14:57.978397 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.038576 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.038899 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.038926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.038952 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.038970 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.142776 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.142832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.142860 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.142893 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.142916 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.245203 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.245257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.245279 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.245301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.245317 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.349171 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.349214 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.349229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.349249 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.349262 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.430336 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.450177 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.453160 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.453197 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.453209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.453227 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.453241 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.467741 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.492921 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.514293 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.543341 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.561602 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.561681 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.561696 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.561719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.561736 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.598700 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.617856 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.632887 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:58Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.665005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.665044 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.665055 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.665072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.665083 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.768297 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.768354 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.768364 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.768383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.768392 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.871461 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.872198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.872224 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.872247 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.872264 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.975555 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.975599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.975607 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.975622 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:58 crc kubenswrapper[4736]: I0316 15:14:58.975633 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:58Z","lastTransitionTime":"2026-03-16T15:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.007036 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.024290 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.043932 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.063178 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.077344 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.077385 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.077396 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.077412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.077423 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.094415 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.113420 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.128615 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.142250 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:14:59Z is after 2025-08-24T17:21:41Z" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.179481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.179532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.179542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.179559 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.179569 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.283319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.283376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.283388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.283409 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.283421 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.386050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.386097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.386127 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.386144 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.386157 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.488660 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.488712 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.488728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.488747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.488761 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.591795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.592220 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.592233 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.592255 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.592271 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.696399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.696442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.696456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.696473 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.696483 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.792348 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.792455 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.792603 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.792623 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:15:15.792560445 +0000 UTC m=+117.519950772 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.792684 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:15.792667249 +0000 UTC m=+117.520057566 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.792750 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.793030 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.793198 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:15.793165583 +0000 UTC m=+117.520556060 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.799420 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.799459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.799489 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.799508 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.799518 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.893236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.893302 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893497 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893519 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893534 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893496 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893625 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893639 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893597 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:15.893581859 +0000 UTC m=+117.620972146 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.893717 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:15.893698322 +0000 UTC m=+117.621088599 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.902557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.902608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.902621 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.902642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.902664 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:14:59Z","lastTransitionTime":"2026-03-16T15:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.977585 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.977675 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.978163 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.977702 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.978370 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:14:59 crc kubenswrapper[4736]: I0316 15:14:59.978388 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:14:59 crc kubenswrapper[4736]: E0316 15:14:59.978189 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.010058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.010121 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.010130 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.010144 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.010154 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.121751 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.121947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.122007 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.122068 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.122142 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.224553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.224598 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.224624 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.224645 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.224656 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.327478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.327837 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.327933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.328034 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.328129 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.430314 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.430364 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.430376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.430393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.430406 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.437778 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.439526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.439881 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.455282 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.467780 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.491973 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.509314 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.524194 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.533294 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.533328 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.533337 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.533353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.533363 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.544355 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.559355 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.573762 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:00Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.636495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.636551 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.636569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.636594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.636613 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.739811 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.740147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.740209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.740276 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.740340 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.842921 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.842966 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.842976 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.842994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.843007 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.946528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.946588 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.946608 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.946630 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:00 crc kubenswrapper[4736]: I0316 15:15:00.946647 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:00Z","lastTransitionTime":"2026-03-16T15:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.049367 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.049416 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.049431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.049451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.049465 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.152320 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.152365 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.152374 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.152391 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.152400 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.255022 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.255067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.255079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.255097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.255137 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.357257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.357301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.357311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.357334 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.357345 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.459717 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.459768 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.459781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.459808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.459833 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.562212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.562267 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.562279 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.562297 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.562309 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.665425 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.665475 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.665485 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.665500 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.665524 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.767769 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.767810 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.767821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.767852 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.767862 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.871479 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.871562 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.871585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.871616 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.871639 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.974651 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.974728 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.974752 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.974784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.974806 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:01Z","lastTransitionTime":"2026-03-16T15:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.977302 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.977422 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:01 crc kubenswrapper[4736]: E0316 15:15:01.977473 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:01 crc kubenswrapper[4736]: I0316 15:15:01.977314 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:01 crc kubenswrapper[4736]: E0316 15:15:01.977734 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:01 crc kubenswrapper[4736]: E0316 15:15:01.977785 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.077002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.077048 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.077062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.077081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.077094 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.179347 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.179442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.179467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.179525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.179544 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.283062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.283142 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.283156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.283182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.283225 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.386531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.386587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.386599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.386618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.386631 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.447224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.464005 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.481788 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.493129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.493242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.493259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.493279 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.493295 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.517651 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.540411 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.561463 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.577096 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.591354 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.595701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.595744 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.595759 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.595779 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.595792 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.606427 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:02Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.698394 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.698428 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.698438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.698452 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.698462 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.801244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.801299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.801311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.801329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.801341 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.903884 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.903925 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.903938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.903955 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:02 crc kubenswrapper[4736]: I0316 15:15:02.903967 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:02Z","lastTransitionTime":"2026-03-16T15:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.006094 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.006173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.006187 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.006204 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.006228 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.108747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.108794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.108803 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.108825 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.108835 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.211491 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.211525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.211534 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.211549 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.211559 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.314211 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.314263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.314275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.314292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.314305 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.416730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.416771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.416781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.416796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.416806 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.519306 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.519366 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.519393 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.519426 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.519450 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.621990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.622089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.622126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.622148 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.622163 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.724731 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.724802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.724815 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.724836 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.724869 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.828770 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.828848 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.828868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.828895 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.828915 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.931826 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.931878 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.931894 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.931914 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.931925 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:03Z","lastTransitionTime":"2026-03-16T15:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.977781 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.977877 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:03 crc kubenswrapper[4736]: E0316 15:15:03.977928 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:03 crc kubenswrapper[4736]: I0316 15:15:03.977802 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:03 crc kubenswrapper[4736]: E0316 15:15:03.978016 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:03 crc kubenswrapper[4736]: E0316 15:15:03.978158 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.034693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.034767 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.034778 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.034795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.034809 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.136806 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.136844 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.136858 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.136876 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.136889 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.239167 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.239201 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.239212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.239231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.239243 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.341542 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.341581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.341591 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.341606 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.341616 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.444233 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.444349 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.444362 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.444380 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.444393 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.547359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.547392 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.547402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.547419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.547430 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.651000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.651042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.651053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.651069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.651082 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.753990 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.754040 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.754058 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.754097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.754153 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.857484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.857531 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.857543 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.857563 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.857575 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.960829 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.960880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.960891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.960912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:04 crc kubenswrapper[4736]: I0316 15:15:04.960924 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:04Z","lastTransitionTime":"2026-03-16T15:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.063832 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.063909 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.063928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.063981 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.064000 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.166941 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.167074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.167092 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.167147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.167173 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.270747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.270819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.270838 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.270868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.270887 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.373374 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.373432 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.373446 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.373465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.373479 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.476853 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.476926 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.476949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.476972 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.476990 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.580405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.580456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.580472 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.580496 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.580513 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.683359 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.683407 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.683419 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.683434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.683446 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.785397 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.785440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.785456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.785476 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.785489 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.888129 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.888182 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.888198 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.888217 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.888230 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.977321 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.977321 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:05 crc kubenswrapper[4736]: E0316 15:15:05.977462 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:05 crc kubenswrapper[4736]: E0316 15:15:05.977546 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.977339 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:05 crc kubenswrapper[4736]: E0316 15:15:05.977619 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.991226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.991272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.991286 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.991308 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:05 crc kubenswrapper[4736]: I0316 15:15:05.991324 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:05Z","lastTransitionTime":"2026-03-16T15:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.093954 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.093988 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.094000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.094018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.094029 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.197200 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.197254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.197270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.197292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.197309 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.300315 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.300370 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.300387 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.300411 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.300429 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.403412 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.403456 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.403477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.403506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.403523 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.506021 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.506073 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.506089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.506137 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.506156 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.595322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.595374 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.595388 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.595408 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.595419 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.606963 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:06Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.609752 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.609790 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.609802 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.609819 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.609830 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.623303 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:06Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.626015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.626042 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.626050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.626064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.626074 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.638571 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:06Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.641740 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.641772 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.641783 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.641799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.641809 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.657059 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:06Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.661944 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.661991 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.662005 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.662032 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.662045 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.674525 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:06Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:06 crc kubenswrapper[4736]: E0316 15:15:06.674637 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.676299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.676368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.676387 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.676415 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.676435 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.778756 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.778796 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.778805 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.778818 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.778828 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.881052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.881087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.881096 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.881126 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.881137 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.984057 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.984138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.984163 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.984190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:06 crc kubenswrapper[4736]: I0316 15:15:06.984209 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:06Z","lastTransitionTime":"2026-03-16T15:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.087573 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.087617 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.087629 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.087646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.087658 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.190206 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.190244 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.190254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.190270 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.190282 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.293015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.293069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.293081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.293097 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.293132 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.395341 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.395376 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.395386 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.395400 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.395409 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.498093 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.498138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.498146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.498162 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.498176 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.601088 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.601185 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.601202 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.601225 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.601243 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.703623 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.703752 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.703771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.703913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.703940 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.762123 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-99qcn"] Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.762544 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.766429 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.766537 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.767193 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.785229 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.801770 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.806008 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.806047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.806056 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.806071 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.806080 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.819724 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.833031 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.843313 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.866400 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.871394 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a828b340-068e-4918-8873-1137677926e8-hosts-file\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.871459 
4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c7ds\" (UniqueName: \"kubernetes.io/projected/a828b340-068e-4918-8873-1137677926e8-kube-api-access-2c7ds\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.882085 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc4
78274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.898916 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.908368 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.908413 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.908424 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.908439 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.908453 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:07Z","lastTransitionTime":"2026-03-16T15:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.917444 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:07Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.972499 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a828b340-068e-4918-8873-1137677926e8-hosts-file\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.972576 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c7ds\" (UniqueName: \"kubernetes.io/projected/a828b340-068e-4918-8873-1137677926e8-kube-api-access-2c7ds\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.972688 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a828b340-068e-4918-8873-1137677926e8-hosts-file\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.977297 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.977421 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:07 crc kubenswrapper[4736]: E0316 15:15:07.977534 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:07 crc kubenswrapper[4736]: I0316 15:15:07.977582 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:07 crc kubenswrapper[4736]: E0316 15:15:07.977700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:07 crc kubenswrapper[4736]: E0316 15:15:07.977816 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:07.997955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c7ds\" (UniqueName: \"kubernetes.io/projected/a828b340-068e-4918-8873-1137677926e8-kube-api-access-2c7ds\") pod \"node-resolver-99qcn\" (UID: \"a828b340-068e-4918-8873-1137677926e8\") " pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.011304 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.011383 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.011410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.011441 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.011464 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.087440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-99qcn" Mar 16 15:15:08 crc kubenswrapper[4736]: W0316 15:15:08.113279 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda828b340_068e_4918_8873_1137677926e8.slice/crio-e778c2c0ef956b5080a23eb4c9e24402ef2f848d1539f58685b821dcb24e8223 WatchSource:0}: Error finding container e778c2c0ef956b5080a23eb4c9e24402ef2f848d1539f58685b821dcb24e8223: Status 404 returned error can't find the container with id e778c2c0ef956b5080a23eb4c9e24402ef2f848d1539f58685b821dcb24e8223 Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.115484 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.115530 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.115548 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.115691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.115713 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.132026 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-j9cg2"] Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.132453 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-zgcj2"] Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.132668 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hsw5j"] Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.133288 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.133490 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.135223 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.142060 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.144228 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.145529 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.145673 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.145801 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146044 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146217 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146247 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146352 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146398 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146468 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.146513 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.166782 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.191899 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637
100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d02
6f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.223840 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.226390 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.226540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.226631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.226716 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.226808 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.239741 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.254961 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.268877 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.275833 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-system-cni-dir\") pod 
\"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.276241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-cni-binary-copy\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.276415 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-daemon-config\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.276560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.276703 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45c93e24-5358-402f-9ace-e85478dedb49-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.276864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-conf-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277011 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-cnibin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277348 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-k8s-cni-cncf-io\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-bin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277783 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-cnibin\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.277922 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-netns\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278049 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-multus-certs\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-os-release\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278333 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdplh\" (UniqueName: \"kubernetes.io/projected/8fb586d7-2a83-4790-8de9-e7e993542167-kube-api-access-tdplh\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278453 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-system-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278597 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-kubelet\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278703 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-os-release\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.278907 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-multus\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279048 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjlpz\" (UniqueName: \"kubernetes.io/projected/45c93e24-5358-402f-9ace-e85478dedb49-kube-api-access-tjlpz\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-socket-dir-parent\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279317 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-hostroot\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279420 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/45c93e24-5358-402f-9ace-e85478dedb49-rootfs\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45c93e24-5358-402f-9ace-e85478dedb49-proxy-tls\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-etc-kubernetes\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.279754 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nfnm\" (UniqueName: \"kubernetes.io/projected/eabe1535-f51c-4a72-b299-aab5ca4ab624-kube-api-access-5nfnm\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc 
kubenswrapper[4736]: I0316 15:15:08.279892 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-binary-copy\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.284429 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.299717 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.314308 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.328971 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.330929 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.330977 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.330986 4736 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.331002 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.331011 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.341904 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.355885 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.370586 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45c93e24-5358-402f-9ace-e85478dedb49-proxy-tls\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-etc-kubernetes\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381586 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nfnm\" (UniqueName: \"kubernetes.io/projected/eabe1535-f51c-4a72-b299-aab5ca4ab624-kube-api-access-5nfnm\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " 
pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-binary-copy\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381648 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-system-cni-dir\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381670 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-cni-binary-copy\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45c93e24-5358-402f-9ace-e85478dedb49-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381719 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-conf-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381739 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-daemon-config\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381769 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381823 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-cnibin\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " 
pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381844 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-cnibin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381867 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-k8s-cni-cncf-io\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381892 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-bin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381929 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-netns\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381955 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-system-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.381982 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-kubelet\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382008 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-multus-certs\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382030 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-os-release\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382053 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdplh\" (UniqueName: \"kubernetes.io/projected/8fb586d7-2a83-4790-8de9-e7e993542167-kube-api-access-tdplh\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382074 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-os-release\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-multus\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382156 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/45c93e24-5358-402f-9ace-e85478dedb49-rootfs\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382201 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjlpz\" (UniqueName: \"kubernetes.io/projected/45c93e24-5358-402f-9ace-e85478dedb49-kube-api-access-tjlpz\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-socket-dir-parent\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382249 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-hostroot\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-cnibin\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382320 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-hostroot\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382394 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-cnibin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382431 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-k8s-cni-cncf-io\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-etc-kubernetes\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382469 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-bin\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-netns\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382621 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-system-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382660 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-kubelet\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382697 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-run-multus-certs\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.382758 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-os-release\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-os-release\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383044 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-host-var-lib-cni-multus\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383583 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383631 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/45c93e24-5358-402f-9ace-e85478dedb49-rootfs\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383868 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-conf-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383899 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-socket-dir-parent\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.383928 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8fb586d7-2a83-4790-8de9-e7e993542167-system-cni-dir\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.384330 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-cni-dir\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.384872 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-binary-copy\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.385942 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45c93e24-5358-402f-9ace-e85478dedb49-mcd-auth-proxy-config\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.388142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-cni-binary-copy\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.388195 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/eabe1535-f51c-4a72-b299-aab5ca4ab624-multus-daemon-config\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.390461 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.393554 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8fb586d7-2a83-4790-8de9-e7e993542167-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc 
kubenswrapper[4736]: I0316 15:15:08.395375 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/45c93e24-5358-402f-9ace-e85478dedb49-proxy-tls\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.400020 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nfnm\" (UniqueName: \"kubernetes.io/projected/eabe1535-f51c-4a72-b299-aab5ca4ab624-kube-api-access-5nfnm\") pod \"multus-zgcj2\" (UID: \"eabe1535-f51c-4a72-b299-aab5ca4ab624\") " pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.401019 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjlpz\" (UniqueName: \"kubernetes.io/projected/45c93e24-5358-402f-9ace-e85478dedb49-kube-api-access-tjlpz\") pod \"machine-config-daemon-j9cg2\" (UID: \"45c93e24-5358-402f-9ace-e85478dedb49\") " pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.401647 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdplh\" (UniqueName: \"kubernetes.io/projected/8fb586d7-2a83-4790-8de9-e7e993542167-kube-api-access-tdplh\") pod \"multus-additional-cni-plugins-hsw5j\" (UID: \"8fb586d7-2a83-4790-8de9-e7e993542167\") " pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.406898 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.421740 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.433580 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.433631 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.433646 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.433666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.433681 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.438492 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.459603 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.470452 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-99qcn" event={"ID":"a828b340-068e-4918-8873-1137677926e8","Type":"ContainerStarted","Data":"0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.470510 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-99qcn" event={"ID":"a828b340-068e-4918-8873-1137677926e8","Type":"ContainerStarted","Data":"e778c2c0ef956b5080a23eb4c9e24402ef2f848d1539f58685b821dcb24e8223"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.473385 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-zgcj2" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.475209 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: W0316 15:15:08.485911 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeabe1535_f51c_4a72_b299_aab5ca4ab624.slice/crio-50bb292d073c3e0a50e521a3fa826e545c34c9d003e6979055d0953fd57d80eb WatchSource:0}: Error finding container 50bb292d073c3e0a50e521a3fa826e545c34c9d003e6979055d0953fd57d80eb: Status 404 returned error can't find the container with id 50bb292d073c3e0a50e521a3fa826e545c34c9d003e6979055d0953fd57d80eb Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.488271 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.488607 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.501849 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.507170 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:15:08 crc kubenswrapper[4736]: W0316 15:15:08.516482 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fb586d7_2a83_4790_8de9_e7e993542167.slice/crio-d5677d343a73e2a5882508573458d22f83f86a8b5bb1ea46b0a13bbfc926e83c WatchSource:0}: Error finding container d5677d343a73e2a5882508573458d22f83f86a8b5bb1ea46b0a13bbfc926e83c: Status 404 returned error can't find the container with id d5677d343a73e2a5882508573458d22f83f86a8b5bb1ea46b0a13bbfc926e83c Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.517976 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w7kdw"] Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.519434 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.520271 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.523503 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.523884 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.526185 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.526687 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.527080 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.527275 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.527309 4736 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.538997 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.540688 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.540721 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.540733 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.540751 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.540764 4736 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.554293 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.585281 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.607427 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.622599 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644051 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644146 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644177 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644194 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.644336 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\
":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.663178 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.678492 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685439 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685502 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685551 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685613 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685684 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685786 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685850 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685874 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685912 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p48n\" (UniqueName: \"kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685966 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.685992 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686016 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686071 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686128 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686160 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" 
Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.686226 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.693416 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.715689 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.735693 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.749790 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.749840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.749854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.749880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.749897 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.756392 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.774341 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.787793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.787869 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.787902 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.787960 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.787996 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788027 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788060 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788213 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788252 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788283 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788319 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p48n\" (UniqueName: \"kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788418 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788487 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket\") pod 
\"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788517 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788549 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788584 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.788615 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.789794 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.789897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.789964 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790071 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790155 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790263 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790360 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.790961 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.791018 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.791052 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.791082 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.792214 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.793223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.793313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.793238 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.793220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.793383 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.796233 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.810375 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p48n\" (UniqueName: \"kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n\") pod \"ovnkube-node-w7kdw\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.849290 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.852536 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.852585 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.852594 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.852612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.852625 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: W0316 15:15:08.862259 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83041fd9_2e75_4569_ab47_ac7590a189a6.slice/crio-a4d2e50e055e9dc6357c91ca24fca2b83be1471759c4d06da42cc823f6324718 WatchSource:0}: Error finding container a4d2e50e055e9dc6357c91ca24fca2b83be1471759c4d06da42cc823f6324718: Status 404 returned error can't find the container with id a4d2e50e055e9dc6357c91ca24fca2b83be1471759c4d06da42cc823f6324718 Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.956041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.956147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.956170 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.956202 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.956222 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:08Z","lastTransitionTime":"2026-03-16T15:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:08 crc kubenswrapper[4736]: I0316 15:15:08.995447 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:08Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.016042 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.053456 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.060068 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.060138 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.060152 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.060186 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.060199 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.083302 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.121311 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.150781 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.162830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.163209 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.163225 4736 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.163242 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.163125 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.163255 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.184267 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b52
7a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.199552 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.211867 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.223094 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.234753 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.248875 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.266019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.266064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.266074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.266089 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.266115 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.369709 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.369751 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.369761 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.369778 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.369791 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.474023 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.474062 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.474075 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.474093 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.474128 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.476007 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" exitCode=0 Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.476079 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.476140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"a4d2e50e055e9dc6357c91ca24fca2b83be1471759c4d06da42cc823f6324718"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.478898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.478919 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.478929 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"0ead1e831920bdf8c29c4aa4ebf060e7297ea804fcd6fcfbdaf124917b0073ae"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.482828 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerStarted","Data":"5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.482866 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerStarted","Data":"50bb292d073c3e0a50e521a3fa826e545c34c9d003e6979055d0953fd57d80eb"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.485487 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7" exitCode=0 Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.485514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.485527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerStarted","Data":"d5677d343a73e2a5882508573458d22f83f86a8b5bb1ea46b0a13bbfc926e83c"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.510772 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.535216 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 
15:15:09.570210 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.576788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.576811 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.576820 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.576835 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.576846 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.589087 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.609351 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.639291 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.653544 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.683494 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.685442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.685478 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.685495 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.685521 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.685536 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.700671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.715612 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.731494 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.753741 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.766944 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.778909 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.788353 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.788379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.788389 4736 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.788406 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.788419 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.790576 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.809813 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.825618 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc 
kubenswrapper[4736]: I0316 15:15:09.836718 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.867217 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b
306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.886632 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.893128 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.893219 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.893229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.893263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.893275 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.901094 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.914853 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.927985 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.945713 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.962445 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.975020 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:09Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.977405 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.977415 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.977510 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:09 crc kubenswrapper[4736]: E0316 15:15:09.977546 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:09 crc kubenswrapper[4736]: E0316 15:15:09.977715 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:09 crc kubenswrapper[4736]: E0316 15:15:09.977818 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.996151 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.996218 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.996229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.996260 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:09 crc kubenswrapper[4736]: I0316 15:15:09.996275 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:09Z","lastTransitionTime":"2026-03-16T15:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.098587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.098636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.098648 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.098667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.098679 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.200830 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.200901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.200915 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.200936 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.200950 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.303750 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.304488 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.304518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.304596 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.304626 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.410429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.410492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.410506 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.410533 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.410547 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.494952 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.495029 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.495042 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.495056 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.495069 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.500340 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerStarted","Data":"e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.514378 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.514429 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.514442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.514465 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.514476 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.545530 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.566161 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.581197 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.599423 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617282 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617666 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617693 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.617702 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.634489 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.649690 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.664176 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.681216 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.704976 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-li
b\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\
\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.720399 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.720440 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.720454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.720470 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.720480 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.722663 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.746190 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.759565 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:10Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.822809 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.822845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.822856 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.822873 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.822885 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.926442 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.926497 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.926508 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.926526 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:10 crc kubenswrapper[4736]: I0316 15:15:10.926536 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:10Z","lastTransitionTime":"2026-03-16T15:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.029340 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.029394 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.029404 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.029423 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.029436 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.133095 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.133212 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.133235 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.133269 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.133293 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.236671 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.236750 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.236772 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.236801 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.236822 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.340943 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.341612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.341635 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.341668 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.341692 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.446683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.446817 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.446846 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.446885 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.446912 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.510439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.514005 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734" exitCode=0 Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.514196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.553590 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-
16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.558041 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.558079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.558087 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.558120 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.558131 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.585654 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.611544 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.633874 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.651860 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 
15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.660621 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.660656 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.660664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.660680 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.660693 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.669598 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.684325 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.699485 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.717297 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.734871 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.751095 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763196 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763226 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763237 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763265 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.763811 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.787525 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.866231 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.866272 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.866282 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.866301 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.866315 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.873683 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.890211 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.916499 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.938879 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.955327 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.969799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.969845 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.969857 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.969879 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.969891 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:11Z","lastTransitionTime":"2026-03-16T15:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.974542 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.977307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.977434 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.977463 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:11 crc kubenswrapper[4736]: E0316 15:15:11.977625 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:11 crc kubenswrapper[4736]: E0316 15:15:11.977759 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:11 crc kubenswrapper[4736]: E0316 15:15:11.977875 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:11 crc kubenswrapper[4736]: I0316 15:15:11.990574 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:11Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.005263 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.019471 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.040788 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.052070 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.073188 4736 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.073229 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.073239 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.073258 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.073270 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.074187 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z 
is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.098317 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\
"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.111539 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.176882 4736 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.176979 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.176999 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.177031 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.177054 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.280590 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.280640 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.280650 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.280667 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.280678 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.385019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.385084 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.385149 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.385179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.385208 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.489525 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.489595 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.489619 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.489670 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.489695 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.522237 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4" exitCode=0 Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.522311 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.544578 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.580558 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.594665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.594733 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.594755 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.594784 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.594809 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.619035 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d
80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.639238 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.665167 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.684436 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.697671 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.697730 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.697748 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.697776 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.697795 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.706176 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.724433 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.743238 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.758420 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.782879 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.803257 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.803292 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.803304 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.803322 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.803334 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.804235 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.821619 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:12Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.905664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.905701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.905714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.905732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:12 crc kubenswrapper[4736]: I0316 15:15:12.905746 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:12Z","lastTransitionTime":"2026-03-16T15:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.008854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.008912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.008922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.008939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.008950 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.112444 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.112518 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.112538 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.112571 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.112595 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.227402 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.227477 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.227498 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.227529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.227549 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.330439 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.330574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.330599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.330636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.330658 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.433524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.433574 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.433587 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.433607 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.433620 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.533939 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.536460 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.536511 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.536532 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.536556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.536577 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.539069 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d" exitCode=0 Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.539195 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.578289 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f
608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.604930 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.632759 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.658310 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.671701 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.671747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.671758 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.671777 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.671791 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.679089 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.695767 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.711342 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.723851 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.738235 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.751702 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.774819 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.775627 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.775661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.775674 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.775697 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.775710 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.793094 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.806036 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:13Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.879569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.879628 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.879642 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.879665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.879683 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.977499 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.977549 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:13 crc kubenswrapper[4736]: E0316 15:15:13.977674 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:13 crc kubenswrapper[4736]: E0316 15:15:13.977840 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.978096 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:13 crc kubenswrapper[4736]: E0316 15:15:13.978394 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.983433 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.983491 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.983503 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.983519 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:13 crc kubenswrapper[4736]: I0316 15:15:13.983531 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:13Z","lastTransitionTime":"2026-03-16T15:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.086797 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.086852 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.086864 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.086886 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.086897 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.189994 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.190379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.190463 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.190626 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.190781 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.294250 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.294309 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.294325 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.294357 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.294374 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.397329 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.397394 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.397418 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.397446 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.397462 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.500065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.500154 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.500173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.500199 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.500216 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.548128 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496" exitCode=0 Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.548188 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.567651 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.592875 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.605486 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.605517 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.605529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.605548 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.605560 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.623873 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z 
is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.624117 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nq8cg"] Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.627730 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.631902 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.631925 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.635694 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.635727 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.656680 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.670403 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.679182 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7ec4aa5-6e80-415b-a253-6374da970e4d-host\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.679237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrxm\" (UniqueName: \"kubernetes.io/projected/a7ec4aa5-6e80-415b-a253-6374da970e4d-kube-api-access-xcrxm\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.679283 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7ec4aa5-6e80-415b-a253-6374da970e4d-serviceca\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.707605 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.713271 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.713540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.713691 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.713835 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.713901 4736 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.729695 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.746270 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.760361 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.778135 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.780751 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7ec4aa5-6e80-415b-a253-6374da970e4d-serviceca\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " 
pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.780839 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7ec4aa5-6e80-415b-a253-6374da970e4d-host\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.780881 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcrxm\" (UniqueName: \"kubernetes.io/projected/a7ec4aa5-6e80-415b-a253-6374da970e4d-kube-api-access-xcrxm\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.781415 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7ec4aa5-6e80-415b-a253-6374da970e4d-host\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.782602 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a7ec4aa5-6e80-415b-a253-6374da970e4d-serviceca\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.799901 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\
\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.808358 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcrxm\" (UniqueName: \"kubernetes.io/projected/a7ec4aa5-6e80-415b-a253-6374da970e4d-kube-api-access-xcrxm\") pod \"node-ca-nq8cg\" (UID: \"a7ec4aa5-6e80-415b-a253-6374da970e4d\") " pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.816962 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.817018 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.817033 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.817053 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.817067 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.819230 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.834245 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.851590 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.867910 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.898585 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.918777 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.923983 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.924038 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.924052 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.924079 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.924094 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:14Z","lastTransitionTime":"2026-03-16T15:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.931845 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.945131 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.949003 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-nq8cg" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.957974 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: I0316 15:15:14.975488 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:14Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:14 crc kubenswrapper[4736]: W0316 15:15:14.980348 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ec4aa5_6e80_415b_a253_6374da970e4d.slice/crio-6633cfd5a5f5c9155f5b028b0885e60815383960c581ea6eef43223f3a5a93bc WatchSource:0}: Error finding container 6633cfd5a5f5c9155f5b028b0885e60815383960c581ea6eef43223f3a5a93bc: Status 404 returned error can't find the container with id 6633cfd5a5f5c9155f5b028b0885e60815383960c581ea6eef43223f3a5a93bc Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.029766 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.030540 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.030570 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.030580 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.030612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.030627 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.069512 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.096182 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.119241 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.133733 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.133794 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.133808 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.133831 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.133851 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.135819 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.147611 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.236829 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.236870 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.236880 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.236896 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.236910 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.340459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.340515 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.340529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.340554 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.340567 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.449529 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.449614 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.449635 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.449665 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.449705 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.552854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.552904 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.552915 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.552933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.552943 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.559713 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.560420 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.560641 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.560675 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.564875 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nq8cg" event={"ID":"a7ec4aa5-6e80-415b-a253-6374da970e4d","Type":"ContainerStarted","Data":"bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.564940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nq8cg" event={"ID":"a7ec4aa5-6e80-415b-a253-6374da970e4d","Type":"ContainerStarted","Data":"6633cfd5a5f5c9155f5b028b0885e60815383960c581ea6eef43223f3a5a93bc"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.569993 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fb586d7-2a83-4790-8de9-e7e993542167" containerID="19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7" exitCode=0 Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.570033 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerDied","Data":"19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.594847 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.603756 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.612171 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.620644 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.628206 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.642390 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.656454 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.656524 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.656537 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.656580 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.656594 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.668573 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\
"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.686407 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.704374 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.719416 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.744471 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.759787 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.760636 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.760664 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.760673 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.760690 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.760701 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.775056 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.787022 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.801460 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.815731 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:15Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.864116 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.864173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.864190 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.864216 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.864231 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.892152 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.892302 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.892319 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:15:47.892291544 +0000 UTC m=+149.619681831 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.892348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.892443 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.892457 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.892501 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:47.89249368 +0000 UTC m=+149.619883967 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.892514 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:47.89250935 +0000 UTC m=+149.619899627 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.969343 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.969408 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.969421 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.969443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.969457 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:15Z","lastTransitionTime":"2026-03-16T15:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.977962 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.978025 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.978139 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.977969 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.978263 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.978411 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.993539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:15 crc kubenswrapper[4736]: I0316 15:15:15.993616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.993885 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.993909 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.993926 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.994003 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:47.993977175 +0000 UTC m=+149.721367462 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.994219 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.994296 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.994329 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:15 crc kubenswrapper[4736]: E0316 15:15:15.994455 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:47.994413477 +0000 UTC m=+149.721803814 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.073039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.073150 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.073173 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.073202 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.073220 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.175942 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.176000 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.176015 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.176036 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.176047 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.278849 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.278902 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.278913 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.278933 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.278948 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.382361 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.382434 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.382451 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.382481 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.382499 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.487975 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.488049 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.488067 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.488099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.488145 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.591924 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.592043 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.592064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.592155 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.592221 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.696948 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.697028 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.697047 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.697074 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.697098 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.725355 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.725410 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.725438 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.725471 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:16 crc kubenswrapper[4736]: I0316 15:15:16.725494 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:16Z","lastTransitionTime":"2026-03-16T15:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.067612 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:16Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.074581 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.074618 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.074633 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.074654 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.074670 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.082827 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.095990 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.102012 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.102050 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.102059 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.102078 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.102090 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.106917 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.119397 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.128683 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.128745 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.128763 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.128788 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.128807 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.141597 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.142947 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.145834 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.145881 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.145891 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.145908 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.145919 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.157629 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.157799 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.158899 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.160549 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.160660 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.160719 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.160740 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.160755 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.174916 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.192522 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.210039 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.224763 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.238285 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.252780 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.263795 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.263900 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.263913 4736 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.263938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.263953 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.264798 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.285530 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.303417 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.316856 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.367371 4736 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.367449 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.367471 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.367502 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.367523 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.470508 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.470545 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.470556 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.470577 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.470589 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.574345 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.574405 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.574426 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.574459 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.574498 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.590095 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" event={"ID":"8fb586d7-2a83-4790-8de9-e7e993542167","Type":"ContainerStarted","Data":"0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.609311 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.622918 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.642820 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.661262 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.674964 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.677743 4736 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.677771 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.677781 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.677799 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.677812 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.687051 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.704660 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.728506 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensu
re-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.749329 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.766926 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.782240 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.782283 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.782294 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.782311 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.782325 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.790536 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.810512 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.827485 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.838876 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:17Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.884939 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.885014 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.885024 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.885037 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.885046 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.977589 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.977647 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.977734 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.977871 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.978148 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:17 crc kubenswrapper[4736]: E0316 15:15:17.978224 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.987389 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.987431 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.987443 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.987462 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:17 crc kubenswrapper[4736]: I0316 15:15:17.987477 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:17Z","lastTransitionTime":"2026-03-16T15:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.089705 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.089732 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.089740 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.089753 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.089763 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.192499 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.192625 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.192644 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.192661 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.192674 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.296940 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.297019 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.297039 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.297069 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.297089 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.400072 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.400185 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.400205 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.400233 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.400252 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.503064 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.503147 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.503161 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.503179 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.503194 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.606254 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.606331 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.606342 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.606358 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.606387 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.709010 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.709054 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.709065 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.709081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.709091 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.812615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.812651 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.812660 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.812678 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.812687 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:18Z","lastTransitionTime":"2026-03-16T15:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:18 crc kubenswrapper[4736]: E0316 15:15:18.913236 4736 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 16 15:15:18 crc kubenswrapper[4736]: I0316 15:15:18.995577 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:18Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.007353 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.027237 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},
{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.041960 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.054823 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.069142 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: E0316 15:15:19.082270 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.082840 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.102807 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.118660 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.136345 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.149258 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.162761 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.180807 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.193565 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.607574 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/0.log" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.611894 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e" exitCode=1 Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.611960 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e"} Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.613284 4736 scope.go:117] "RemoveContainer" containerID="21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e" Mar 16 15:15:19 crc 
kubenswrapper[4736]: I0316 15:15:19.635556 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.659843 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.678195 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.703726 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.720376 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.736878 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.755059 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.772793 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.783575 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.803706 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.826946 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.848600 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:19Z\\\",\\\"message\\\":\\\"informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107332 6470 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:19.107367 6470 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:19.107381 6470 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:19.107481 6470 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:19.107496 6470 handler.go:208] Removed *v1.Node event handler 2\\\\nI0316 15:15:19.107503 6470 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:19.107532 6470 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107557 6470 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:19.107597 6470 factory.go:656] Stopping watch factory\\\\nI0316 15:15:19.107614 6470 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:19.108447 6470 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:19.108469 6470 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0316 15:15:19.108507 6470 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:19.108526 6470 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.868780 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.887179 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:19Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.977766 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.977766 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:19 crc kubenswrapper[4736]: I0316 15:15:19.977829 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:19 crc kubenswrapper[4736]: E0316 15:15:19.978413 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:19 crc kubenswrapper[4736]: E0316 15:15:19.978674 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:19 crc kubenswrapper[4736]: E0316 15:15:19.978957 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.674665 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/0.log" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.677882 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191"} Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.678314 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.694545 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.709367 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.740443 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.765059 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2f
ff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:19Z\\\",\\\"message\\\":\\\"informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107332 6470 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:19.107367 6470 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:19.107381 6470 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:19.107481 6470 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:19.107496 6470 handler.go:208] Removed *v1.Node event handler 2\\\\nI0316 15:15:19.107503 6470 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:19.107532 6470 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107557 6470 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:19.107597 6470 factory.go:656] Stopping watch factory\\\\nI0316 15:15:19.107614 6470 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:19.108447 6470 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:19.108469 6470 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0316 15:15:19.108507 6470 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:19.108526 6470 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.778811 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.792708 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.811746 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.823032 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.837347 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be
98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.848671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.863346 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.875367 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.893246 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:20 crc kubenswrapper[4736]: I0316 15:15:20.920526 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:20Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.684360 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/1.log" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.685038 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/0.log" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.688850 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191" exitCode=1 Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.688899 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191"} Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.688949 4736 scope.go:117] "RemoveContainer" containerID="21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.690236 4736 scope.go:117] "RemoveContainer" containerID="173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191" Mar 16 15:15:21 crc kubenswrapper[4736]: E0316 15:15:21.690549 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.707563 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.725948 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:19Z\\\",\\\"message\\\":\\\"informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107332 6470 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:19.107367 6470 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:19.107381 6470 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:19.107481 6470 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:19.107496 6470 handler.go:208] Removed *v1.Node event handler 2\\\\nI0316 15:15:19.107503 6470 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:19.107532 6470 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107557 6470 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:19.107597 6470 factory.go:656] Stopping watch factory\\\\nI0316 15:15:19.107614 6470 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:19.108447 6470 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:19.108469 6470 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0316 15:15:19.108507 6470 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:19.108526 6470 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.738952 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.763048 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.778070 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.807650 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.821479 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.840174 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.854059 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.871360 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.885523 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.885844 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7"] Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.886629 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.888275 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.888964 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.899381 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.916561 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.932889 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.946129 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.955023 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:21 crc 
kubenswrapper[4736]: I0316 15:15:21.955171 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.955296 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.955407 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7rzj\" (UniqueName: \"kubernetes.io/projected/e23aaec0-f0cc-4a14-a6a0-886adcac1171-kube-api-access-g7rzj\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.958178 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.977426 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.977465 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:21 crc kubenswrapper[4736]: E0316 15:15:21.977614 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.977460 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:21 crc kubenswrapper[4736]: E0316 15:15:21.977808 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:21 crc kubenswrapper[4736]: E0316 15:15:21.978030 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.981931 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:19Z\\\",\\\"message\\\":\\\"informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107332 6470 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:19.107367 6470 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:19.107381 6470 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:19.107481 6470 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:19.107496 6470 handler.go:208] Removed *v1.Node event handler 2\\\\nI0316 15:15:19.107503 6470 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:19.107532 6470 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107557 6470 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:19.107597 6470 factory.go:656] Stopping watch factory\\\\nI0316 15:15:19.107614 6470 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:19.108447 6470 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:19.108469 6470 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0316 15:15:19.108507 6470 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:19.108526 6470 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:21 crc kubenswrapper[4736]: I0316 15:15:21.995459 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:21Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.015973 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.027671 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.037839 4736 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.048264 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.056502 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7rzj\" (UniqueName: \"kubernetes.io/projected/e23aaec0-f0cc-4a14-a6a0-886adcac1171-kube-api-access-g7rzj\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.056744 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.056877 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.056990 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.057844 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-env-overrides\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.058216 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.061130 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.065556 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e23aaec0-f0cc-4a14-a6a0-886adcac1171-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.084120 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7rzj\" (UniqueName: \"kubernetes.io/projected/e23aaec0-f0cc-4a14-a6a0-886adcac1171-kube-api-access-g7rzj\") pod \"ovnkube-control-plane-749d76644c-m74n7\" (UID: \"e23aaec0-f0cc-4a14-a6a0-886adcac1171\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.096208 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.147176 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b014
5b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.167518 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.183645 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.195748 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.205430 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.207570 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.636083 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-smqd4"] Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.636777 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: E0316 15:15:22.636855 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.659236 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.668172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.668384 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzksc\" (UniqueName: \"kubernetes.io/projected/5586cec2-8615-4d8b-a695-65ee04613c35-kube-api-access-dzksc\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.680430 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.702009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" event={"ID":"e23aaec0-f0cc-4a14-a6a0-886adcac1171","Type":"ContainerStarted","Data":"ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f"} Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.702064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" event={"ID":"e23aaec0-f0cc-4a14-a6a0-886adcac1171","Type":"ContainerStarted","Data":"418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716"} Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.702077 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" event={"ID":"e23aaec0-f0cc-4a14-a6a0-886adcac1171","Type":"ContainerStarted","Data":"ba96aa60435db53dafc68ba1e395516d21b41a54677a24f94c38afabb9197776"} Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.704153 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/1.log" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.707579 4736 scope.go:117] "RemoveContainer" containerID="173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191" Mar 16 15:15:22 crc kubenswrapper[4736]: E0316 15:15:22.707743 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.720909 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2f
ff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a53aaccac7d4f4dedee8c3393843a866493faec99d1cc61a10a4dc4349814e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:19Z\\\",\\\"message\\\":\\\"informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107332 6470 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:19.107367 6470 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:19.107381 6470 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:19.107481 6470 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:19.107496 6470 handler.go:208] Removed *v1.Node event handler 2\\\\nI0316 15:15:19.107503 6470 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:19.107532 6470 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0316 15:15:19.107557 6470 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:19.107597 6470 factory.go:656] Stopping watch factory\\\\nI0316 15:15:19.107614 6470 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:19.108447 6470 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:19.108469 6470 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0316 15:15:19.108507 6470 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:19.108526 6470 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event 
handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a
155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.741893 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.758455 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.769444 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dzksc\" (UniqueName: \"kubernetes.io/projected/5586cec2-8615-4d8b-a695-65ee04613c35-kube-api-access-dzksc\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.769515 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: E0316 15:15:22.770535 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:22 crc kubenswrapper[4736]: E0316 15:15:22.770669 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:23.270622089 +0000 UTC m=+124.998012366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.778213 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.794916 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzksc\" (UniqueName: \"kubernetes.io/projected/5586cec2-8615-4d8b-a695-65ee04613c35-kube-api-access-dzksc\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.795596 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.811352 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.823909 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc 
kubenswrapper[4736]: I0316 15:15:22.847851 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.862935 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.879068 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.892376 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.907700 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.920933 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.932646 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.945803 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.962779 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.973438 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.986838 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:22 crc kubenswrapper[4736]: I0316 15:15:22.997862 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:22Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.020837 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.035734 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.048717 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 
15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.066043 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.079250 4736 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.093306 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.107627 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.121149 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.133700 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.152754 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.166991 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:23Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.275807 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.276046 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.276211 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:24.276169505 +0000 UTC m=+126.003559792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.977169 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.977259 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.977423 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:23 crc kubenswrapper[4736]: I0316 15:15:23.977502 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.977431 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.977617 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.977794 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:23 crc kubenswrapper[4736]: E0316 15:15:23.977935 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:24 crc kubenswrapper[4736]: E0316 15:15:24.083813 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:24 crc kubenswrapper[4736]: I0316 15:15:24.286740 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:24 crc kubenswrapper[4736]: E0316 15:15:24.286935 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:24 crc kubenswrapper[4736]: E0316 15:15:24.287010 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:26.286990738 +0000 UTC m=+128.014381025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:25 crc kubenswrapper[4736]: I0316 15:15:25.977536 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:25 crc kubenswrapper[4736]: I0316 15:15:25.977570 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:25 crc kubenswrapper[4736]: I0316 15:15:25.977696 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:25 crc kubenswrapper[4736]: I0316 15:15:25.977741 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:25 crc kubenswrapper[4736]: E0316 15:15:25.977751 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:25 crc kubenswrapper[4736]: E0316 15:15:25.977830 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:25 crc kubenswrapper[4736]: E0316 15:15:25.977933 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:25 crc kubenswrapper[4736]: E0316 15:15:25.978087 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:26 crc kubenswrapper[4736]: I0316 15:15:26.307625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:26 crc kubenswrapper[4736]: E0316 15:15:26.307810 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:26 crc kubenswrapper[4736]: E0316 15:15:26.307878 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:30.307859103 +0000 UTC m=+132.035249390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.528729 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.529945 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.530148 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.530326 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.530478 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:27Z","lastTransitionTime":"2026-03-16T15:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.547294 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:27Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.553692 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.553734 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.553747 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.553766 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.553779 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:27Z","lastTransitionTime":"2026-03-16T15:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.572722 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:27Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.577156 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.577378 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.577505 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.577603 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.577707 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:27Z","lastTransitionTime":"2026-03-16T15:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.594090 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:27Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.599984 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.600149 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.600256 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.600347 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.600685 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:27Z","lastTransitionTime":"2026-03-16T15:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.620133 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:27Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.624553 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.624714 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.624821 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.624922 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.625011 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:27Z","lastTransitionTime":"2026-03-16T15:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.639779 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:27Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.640253 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.977674 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.977706 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.977682 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.977918 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.977974 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.978063 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:27 crc kubenswrapper[4736]: I0316 15:15:27.978259 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:27 crc kubenswrapper[4736]: E0316 15:15:27.978430 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.005480 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.024993 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.039257 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.055056 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.073075 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: E0316 15:15:29.084361 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.089764 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.108850 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.128318 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.144416 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.156143 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.168863 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.181233 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc 
kubenswrapper[4736]: I0316 15:15:29.206265 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.219324 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.236416 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.251263 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:29Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.977759 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.977775 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:29 crc kubenswrapper[4736]: E0316 15:15:29.978057 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.977772 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:29 crc kubenswrapper[4736]: I0316 15:15:29.977811 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:29 crc kubenswrapper[4736]: E0316 15:15:29.978181 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:29 crc kubenswrapper[4736]: E0316 15:15:29.978274 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:29 crc kubenswrapper[4736]: E0316 15:15:29.978388 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:30 crc kubenswrapper[4736]: I0316 15:15:30.351534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:30 crc kubenswrapper[4736]: E0316 15:15:30.351846 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:30 crc kubenswrapper[4736]: E0316 15:15:30.352004 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:38.351962707 +0000 UTC m=+140.079353024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:31 crc kubenswrapper[4736]: I0316 15:15:31.977837 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:31 crc kubenswrapper[4736]: I0316 15:15:31.977889 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:31 crc kubenswrapper[4736]: I0316 15:15:31.977888 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:31 crc kubenswrapper[4736]: E0316 15:15:31.978073 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:31 crc kubenswrapper[4736]: E0316 15:15:31.978260 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:31 crc kubenswrapper[4736]: E0316 15:15:31.978575 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:31 crc kubenswrapper[4736]: I0316 15:15:31.979848 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:31 crc kubenswrapper[4736]: E0316 15:15:31.980163 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:33 crc kubenswrapper[4736]: I0316 15:15:33.977773 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:33 crc kubenswrapper[4736]: I0316 15:15:33.977832 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:33 crc kubenswrapper[4736]: I0316 15:15:33.977773 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:33 crc kubenswrapper[4736]: I0316 15:15:33.977940 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:33 crc kubenswrapper[4736]: E0316 15:15:33.977923 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:33 crc kubenswrapper[4736]: E0316 15:15:33.978068 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:33 crc kubenswrapper[4736]: E0316 15:15:33.978378 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:33 crc kubenswrapper[4736]: E0316 15:15:33.978497 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:33 crc kubenswrapper[4736]: I0316 15:15:33.990672 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 16 15:15:34 crc kubenswrapper[4736]: E0316 15:15:34.085797 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:34 crc kubenswrapper[4736]: I0316 15:15:34.979675 4736 scope.go:117] "RemoveContainer" containerID="173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.777312 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/1.log" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.781622 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.782289 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.793797 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.814211 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.830009 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b014
5b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.843782 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.859509 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.873803 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.890871 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.905246 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.919387 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.933274 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.954231 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.969212 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.977051 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.977087 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.977164 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.977181 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:35 crc kubenswrapper[4736]: E0316 15:15:35.977265 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:35 crc kubenswrapper[4736]: E0316 15:15:35.977399 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:35 crc kubenswrapper[4736]: E0316 15:15:35.977521 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:35 crc kubenswrapper[4736]: E0316 15:15:35.977796 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:35 crc kubenswrapper[4736]: I0316 15:15:35.987704 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.000617 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87409d42-1969-4643-be10-0e44352fdfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:35Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.014668 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.026400 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.039328 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.787539 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.788301 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/1.log" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.791449 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" exitCode=1 Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.791494 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.791556 4736 scope.go:117] "RemoveContainer" containerID="173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.792333 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:15:36 crc kubenswrapper[4736]: E0316 15:15:36.792521 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.810362 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.824970 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.849602 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173562ae2f2a1199ffe37d170473088db3382a2fff5deb458ed0c08dcc51a191\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:20Z\\\",\\\"message\\\":\\\" 6681 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0316 15:15:20.784255 6681 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0316 15:15:20.784312 6681 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0316 15:15:20.784339 6681 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0316 15:15:20.784358 6681 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0316 15:15:20.784357 6681 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0316 15:15:20.784410 6681 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0316 15:15:20.784418 6681 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0316 15:15:20.784431 6681 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0316 15:15:20.784442 6681 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0316 15:15:20.784453 6681 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0316 15:15:20.784461 6681 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0316 15:15:20.784507 6681 factory.go:656] Stopping watch factory\\\\nI0316 15:15:20.784540 6681 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0316 15:15:20.784544 6681 handler.go:208] Removed *v1.Node event handler 7\\\\nI0316 15:15:20.784571 6681 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:35Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.978727 6899 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.979209 6899 obj_retry.go:551] Creating *factory.egressNode crc took: 
6.628207ms\\\\nI0316 15:15:35.979250 6899 factory.go:1336] Added *v1.Node event handler 7\\\\nI0316 15:15:35.979301 6899 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0316 15:15:35.979760 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0316 15:15:35.979904 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0316 15:15:35.979970 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0316 15:15:35.980018 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0316 15:15:35.980161 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"no
de-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.865707 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87409d42-1969-4643-be10-0e44352fdfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.892447 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.910865 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699
a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.927678 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265
a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.958262 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:36 crc kubenswrapper[4736]: I0316 15:15:36.987038 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:36Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.004045 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.018399 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.036892 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.054774 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.069432 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc 
kubenswrapper[4736]: I0316 15:15:37.084928 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.101095 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.117634 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.798849 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.804371 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.804904 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.828378 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.844039 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.871454 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:35Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.978727 6899 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.979209 6899 obj_retry.go:551] Creating *factory.egressNode crc took: 6.628207ms\\\\nI0316 15:15:35.979250 6899 factory.go:1336] Added *v1.Node event handler 7\\\\nI0316 15:15:35.979301 6899 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0316 15:15:35.979760 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0316 15:15:35.979904 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0316 15:15:35.979970 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0316 15:15:35.980018 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0316 15:15:35.980161 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.888367 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.903452 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.903539 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.903557 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.903615 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.903634 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:37Z","lastTransitionTime":"2026-03-16T15:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.905703 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.921504 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927160 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87409d42-1969-4643-be10-0e44352fdfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927612 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927674 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927698 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927725 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.927745 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:37Z","lastTransitionTime":"2026-03-16T15:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.945892 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.950215 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.950259 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.950275 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.950299 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.950313 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:37Z","lastTransitionTime":"2026-03-16T15:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.958664 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae4539
6871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.970640 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.976379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.976450 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.976492 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.976526 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.976552 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:37Z","lastTransitionTime":"2026-03-16T15:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.977360 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.977405 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.977430 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.977371 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.977537 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.977650 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.977760 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.977843 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:37 crc kubenswrapper[4736]: I0316 15:15:37.983320 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:37 crc kubenswrapper[4736]: E0316 15:15:37.997912 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.000472 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:37Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.003030 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.003081 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.003099 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.003167 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.003186 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:38Z","lastTransitionTime":"2026-03-16T15:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.022401 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: E0316 15:15:38.022718 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"3
78d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: E0316 15:15:38.022978 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.048632 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.064766 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.100332 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.124194 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b014
5b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.144845 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.161839 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.178926 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.368261 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:38 crc kubenswrapper[4736]: E0316 15:15:38.368516 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:38 crc kubenswrapper[4736]: E0316 15:15:38.368665 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:15:54.36863125 +0000 UTC m=+156.096021567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:38 crc kubenswrapper[4736]: I0316 15:15:38.991394 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:38Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.008892 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15
:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.039821 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9
p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:35Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.978727 6899 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.979209 6899 obj_retry.go:551] Creating *factory.egressNode crc took: 6.628207ms\\\\nI0316 15:15:35.979250 6899 factory.go:1336] Added *v1.Node event handler 7\\\\nI0316 15:15:35.979301 6899 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0316 15:15:35.979760 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0316 15:15:35.979904 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0316 15:15:35.979970 6899 ovnkube.go:599] Stopped 
ovnkube\\\\nI0316 15:15:35.980018 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0316 15:15:35.980161 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.057194 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87409d42-1969-4643-be10-0e44352fdfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.077026 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: E0316 15:15:39.086595 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.102517 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.122393 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.159061 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.184102 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b014
5b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.208906 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.234104 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.249422 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.265755 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.285601 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc 
kubenswrapper[4736]: I0316 15:15:39.305631 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.321561 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.336208 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:39Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.977634 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.977812 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:39 crc kubenswrapper[4736]: E0316 15:15:39.977941 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.978066 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:39 crc kubenswrapper[4736]: I0316 15:15:39.978302 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:39 crc kubenswrapper[4736]: E0316 15:15:39.978595 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:39 crc kubenswrapper[4736]: E0316 15:15:39.978830 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:39 crc kubenswrapper[4736]: E0316 15:15:39.979567 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:41 crc kubenswrapper[4736]: I0316 15:15:41.977785 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:41 crc kubenswrapper[4736]: I0316 15:15:41.977991 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:41 crc kubenswrapper[4736]: I0316 15:15:41.977847 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:41 crc kubenswrapper[4736]: I0316 15:15:41.977850 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:41 crc kubenswrapper[4736]: E0316 15:15:41.978185 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:41 crc kubenswrapper[4736]: E0316 15:15:41.978377 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:41 crc kubenswrapper[4736]: E0316 15:15:41.978511 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:41 crc kubenswrapper[4736]: E0316 15:15:41.978592 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:43 crc kubenswrapper[4736]: I0316 15:15:43.977289 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:43 crc kubenswrapper[4736]: I0316 15:15:43.977346 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:43 crc kubenswrapper[4736]: E0316 15:15:43.977426 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:43 crc kubenswrapper[4736]: I0316 15:15:43.977436 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:43 crc kubenswrapper[4736]: I0316 15:15:43.977456 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:43 crc kubenswrapper[4736]: E0316 15:15:43.977698 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:43 crc kubenswrapper[4736]: E0316 15:15:43.978336 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:43 crc kubenswrapper[4736]: E0316 15:15:43.978593 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:44 crc kubenswrapper[4736]: E0316 15:15:44.088445 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:45 crc kubenswrapper[4736]: I0316 15:15:45.978020 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:45 crc kubenswrapper[4736]: E0316 15:15:45.978139 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:45 crc kubenswrapper[4736]: I0316 15:15:45.980720 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:45 crc kubenswrapper[4736]: E0316 15:15:45.981100 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:45 crc kubenswrapper[4736]: I0316 15:15:45.981850 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:45 crc kubenswrapper[4736]: E0316 15:15:45.982135 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:45 crc kubenswrapper[4736]: I0316 15:15:45.982391 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:45 crc kubenswrapper[4736]: E0316 15:15:45.983385 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:46 crc kubenswrapper[4736]: I0316 15:15:46.006168 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.908340 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.908538 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:51.908501641 +0000 UTC m=+213.635891938 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.909332 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.909381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.909495 4736 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.909534 4736 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.909553 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:16:51.9095436 +0000 UTC m=+213.636933907 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.909638 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-16 15:16:51.909617412 +0000 UTC m=+213.637007689 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.977725 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.977811 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.977885 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.977955 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.978015 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.978071 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:47 crc kubenswrapper[4736]: I0316 15:15:47.978553 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:47 crc kubenswrapper[4736]: E0316 15:15:47.978842 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.010370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.010420 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010584 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010602 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010616 4736 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010628 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010673 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-16 15:16:52.010655894 +0000 UTC m=+213.738046191 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010677 4736 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010697 4736 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.010792 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-16 15:16:52.010771507 +0000 UTC m=+213.738161804 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.197897 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.197938 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.197949 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.197964 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.197973 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:48Z","lastTransitionTime":"2026-03-16T15:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.215063 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.219263 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.219297 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.219305 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.219319 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.219328 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:48Z","lastTransitionTime":"2026-03-16T15:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.232025 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.236528 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.236569 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.236582 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.236599 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.236610 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:48Z","lastTransitionTime":"2026-03-16T15:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.249656 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.253840 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.253895 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.253947 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.253974 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.254024 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:48Z","lastTransitionTime":"2026-03-16T15:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.273073 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.277379 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.277453 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.277467 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.277483 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.277544 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:48Z","lastTransitionTime":"2026-03-16T15:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.297280 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6132c463-6382-43f1-ba00-8f3804f19383\\\",\\\"systemUUID\\\":\\\"378d5087-0041-4b41-a060-d9ae2cec6524\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.297410 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.978412 4736 scope.go:117] "RemoveContainer" 
containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:15:48 crc kubenswrapper[4736]: E0316 15:15:48.978770 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" Mar 16 15:15:48 crc kubenswrapper[4736]: I0316 15:15:48.990364 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45c93e24-5358-402f-9ace-e85478dedb49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293b476aacf60644bc1a9f6f8e685423051b98250fda2682f35df75088f4bf65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tjlpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-j9cg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.000746 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e23aaec0-f0cc-4a14-a6a0-886adcac1171\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://418fb85ce9d1a56dc5cf0dd384de5af22a212d610cf00567896d7229c0d43716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebd8456b32f7f665ea16a7f00723a095c6f29c7b1b6313e7fca794fe5462117f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-o
verrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g7rzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-m74n7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:48Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.010000 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8d99619-84a4-4c1b-9292-1ff94b5f593e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54a9e8325d39b9a28e7029851abaf7a833637c19cddd4bc3ed91b4b25d90ce2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://508caf5d1d449d47efaa2dfbbda3533cfe86b7cfaa9c3d03cf1064a64d707cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0316 15:13:47.934471 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0316 15:13:47.935434 1 observer_polling.go:159] Starting file observer\\\\nI0316 15:13:47.936789 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0316 15:13:47.937516 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0316 15:14:12.379783 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0316 15:14:17.502420 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0316 15:14:17.502480 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:47Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5468b43d6997c8121c6f55c925c07ad833511ca569242bfbb99b4c49f9b712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cac7624f273e3e97ea9932a8346fb9552dd0cc7f4267a915e594d1b76a908c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f10f7586e8207e4e593cfb8c44fbc56ee1c0f25ec07cb0c9dfa5ee8d23413f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-con
troller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.021353 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87409d42-1969-4643-be10-0e44352fdfaf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25eaf0866f956837a0a1db9a3075cbf1183739638b6ce4b781ac949f91022dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86168885df00f265cece2f87ace481f79f876e8cce8778b418e4d96e49d510c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1de312b07050fe5e198d06e375f105423adbccad8e0b4de4cc81c5ef969dfac7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f2b714888ee00afb4c526878aa1fee1665b0bdbefd1bee7d45916128c68ca88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.036239 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fb586d7-2a83-4790-8de9-e7e993542167\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c770c779d20674c6cb8d2eaf99aba6b119bf8e8a816f800aacb524ce102bc28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8e1eb755724f90e2627d180d1a6fcfb11749e368ab6c1ccabd1622c699f33b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6783f6c68b99195f9cc4b4382837e2eaa64bc339afb6a79a82d1ccb7c222734\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://421f86abf4573000079d29a658da4eaff66f43b429e0c1d543aca1eba58887b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07279d9b9301af600f34fc26caa7eae45396871fae7395082e1a9818a8e9534d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebcef2b906163cdb97aebdd10f3fd14903cb8f27e63006f7d1033312d1ba3496\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19ef3b4b9487702a4fc7a23b63acad2a4723773aef1cf8c9979deaeadafcb4c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdplh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hsw5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.047644 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.058891 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.071194 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a8c4cd3cdc1af0eeace24e8125b5e95a9420f7d543f8a291b84ffa4c414769f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: E0316 15:15:49.089679 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.090137 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-zgcj2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eabe1535-f51c-4a72-b299-aab5ca4ab624\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/mul
tus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nfnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-zgcj2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.102484 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-smqd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5586cec2-8615-4d8b-a695-65ee04613c35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:22Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzksc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:22Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-smqd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.126862 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d19d61c-cea4-4fd7-9a5c-6c69023175b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4b1fde2d282e46677a42b2514f8b0576f32bc1b03d206b74d0619a26cc6d359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://350c0c503cf90f227e5a520d1cb5de7d1eb82fc891ba91d80d02ffa28d8d0a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f7afc8f608e5c476b7f802f7fdd6f0c6eb8849f5b95683da7de7970fce15786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dc8a7ee26f00b6a375d3fdc3cfdc637100e666
780da874b60ab8bbb7c3b5fc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff99808c37889a9547341039e35961dc2b70d69973b80813c28475c27e7b5d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7079efc204a87d924e7b1e25733c4c75e37d6bb2f90aacc9d791ecc5d43f473\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://139b87967aef30b2c3e70d0dcbd1c70b527a12b306dd34390aa2a71b8b497fb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebaaf3b1fe0bcfc39145c7b840079381bf13582ca4216178b1d1e7b904d026f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.152640 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90c5713e-4cbf-4152-9dd1-2ba1ec0df626\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:13:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28a8e5e9e0bad5da8847cbea52b5b014
5b0fee8861b0864e21698d4e9a7f9630\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-16T15:14:17Z\\\",\\\"message\\\":\\\"W0316 15:14:17.183496 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0316 15:14:17.183862 1 crypto.go:601] Generating new CA for check-endpoints-signer@1773674057 cert, and key in /tmp/serving-cert-3349447998/serving-signer.crt, /tmp/serving-cert-3349447998/serving-signer.key\\\\nI0316 15:14:17.498698 1 observer_polling.go:159] Starting file observer\\\\nW0316 15:14:17.505885 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0316 15:14:17.506076 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0316 15:14:17.506826 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3349447998/tls.crt::/tmp/serving-cert-3349447998/tls.key\\\\\\\"\\\\nF0316 15:14:17.930057 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:14:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:13:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:13:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:13:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:13:19Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.185968 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.202923 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.217670 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.229402 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 
2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.239401 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-99qcn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a828b340-068e-4918-8873-1137677926e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ce28debc1f81fa0930f7bd9c1fe6a191131f29c23462df10d4ea54ba88a4c51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c7ds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-99qcn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.256638 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:35Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.978727 6899 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.979209 6899 obj_retry.go:551] Creating *factory.egressNode crc took: 6.628207ms\\\\nI0316 15:15:35.979250 6899 factory.go:1336] Added *v1.Node event handler 7\\\\nI0316 15:15:35.979301 6899 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0316 15:15:35.979760 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0316 15:15:35.979904 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0316 15:15:35.979970 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0316 15:15:35.980018 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0316 15:15:35.980161 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:49Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.977440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.977478 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.977464 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:49 crc kubenswrapper[4736]: I0316 15:15:49.977465 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:49 crc kubenswrapper[4736]: E0316 15:15:49.977633 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:49 crc kubenswrapper[4736]: E0316 15:15:49.977698 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:49 crc kubenswrapper[4736]: E0316 15:15:49.977791 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:49 crc kubenswrapper[4736]: E0316 15:15:49.977893 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:51 crc kubenswrapper[4736]: I0316 15:15:51.978345 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:51 crc kubenswrapper[4736]: I0316 15:15:51.978395 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:51 crc kubenswrapper[4736]: I0316 15:15:51.978456 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:51 crc kubenswrapper[4736]: E0316 15:15:51.978542 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:51 crc kubenswrapper[4736]: I0316 15:15:51.978554 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:51 crc kubenswrapper[4736]: E0316 15:15:51.978918 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:51 crc kubenswrapper[4736]: E0316 15:15:51.979074 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:51 crc kubenswrapper[4736]: E0316 15:15:51.979177 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:51 crc kubenswrapper[4736]: I0316 15:15:51.994011 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 16 15:15:53 crc kubenswrapper[4736]: I0316 15:15:53.977674 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:53 crc kubenswrapper[4736]: I0316 15:15:53.977748 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:53 crc kubenswrapper[4736]: I0316 15:15:53.977676 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:53 crc kubenswrapper[4736]: E0316 15:15:53.977955 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:53 crc kubenswrapper[4736]: I0316 15:15:53.977996 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:53 crc kubenswrapper[4736]: E0316 15:15:53.978282 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:53 crc kubenswrapper[4736]: E0316 15:15:53.978323 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:53 crc kubenswrapper[4736]: E0316 15:15:53.978815 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:54 crc kubenswrapper[4736]: E0316 15:15:54.090872 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:54 crc kubenswrapper[4736]: I0316 15:15:54.376774 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:54 crc kubenswrapper[4736]: E0316 15:15:54.376934 4736 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:54 crc kubenswrapper[4736]: E0316 15:15:54.377032 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs podName:5586cec2-8615-4d8b-a695-65ee04613c35 nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.376999958 +0000 UTC m=+188.104390265 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs") pod "network-metrics-daemon-smqd4" (UID: "5586cec2-8615-4d8b-a695-65ee04613c35") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.881780 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/0.log" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.881832 4736 generic.go:334] "Generic (PLEG): container finished" podID="eabe1535-f51c-4a72-b299-aab5ca4ab624" containerID="5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac" exitCode=1 Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.881890 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerDied","Data":"5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac"} Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.883302 4736 scope.go:117] "RemoveContainer" containerID="5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.901723 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nq8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7ec4aa5-6e80-415b-a253-6374da970e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcacda1ed7c866a402fe47ea039742248d867b84e58aceaf60a41adb5603aef4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xcrxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:14Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nq8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:55Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.925150 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b84b52c69615fbbe80eac59219c07ac3dd5c05a89177a6c40454e0609798c6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:55Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.952159 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:43Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:55Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.977764 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:55 crc kubenswrapper[4736]: E0316 15:15:55.977970 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.978362 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.978410 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:55 crc kubenswrapper[4736]: E0316 15:15:55.978486 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.978536 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:55 crc kubenswrapper[4736]: E0316 15:15:55.978593 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:55 crc kubenswrapper[4736]: E0316 15:15:55.978634 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:55 crc kubenswrapper[4736]: I0316 15:15:55.988540 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83041fd9-2e75-4569-ab47-ac7590a189a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4
683e6c9b084617c64d795fe2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-16T15:15:35Z\\\",\\\"message\\\":\\\"il\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.978727 6899 transact.go:42] Configuring OVN: [{Op:update Table:Logical_Router_Static_Route Row:map[ip_prefix:10.217.0.0/22 nexthop:100.64.0.2 policy:{GoSet:[src-ip]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {8944024f-deb7-4076-afb3-4b50a2ff4b4b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:static_routes Mutator:insert Value:{GoSet:[{GoUUID:8944024f-deb7-4076-afb3-4b50a2ff4b4b}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0316 15:15:35.979209 6899 obj_retry.go:551] Creating *factory.egressNode crc took: 6.628207ms\\\\nI0316 15:15:35.979250 6899 factory.go:1336] Added *v1.Node event handler 7\\\\nI0316 15:15:35.979301 6899 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0316 15:15:35.979760 6899 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0316 15:15:35.979904 6899 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0316 15:15:35.979970 6899 ovnkube.go:599] Stopped ovnkube\\\\nI0316 15:15:35.980018 6899 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0316 15:15:35.980161 6899 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w7kdw_openshift-ovn-kubernetes(83041fd9-2e75-4569-ab47-ac7590a189a6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:15:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-16T15:15:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-16T15:15:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9p48n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-16T15:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w7kdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:55Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.005563 4736 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-16T15:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://879d8b7a1fc275828109939927333baec176475d612f229181fd0ba4a9a027ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e207c1e9e9dba7b25f71ca9d47a383a004c337f7918e77573e3b4409051db638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-16T15:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-16T15:15:56Z is after 2025-08-24T17:21:41Z" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.064372 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hsw5j" podStartSLOduration=95.064345196 podStartE2EDuration="1m35.064345196s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.064344906 +0000 UTC m=+157.791735193" watchObservedRunningTime="2026-03-16 15:15:56.064345196 +0000 UTC m=+157.791735493" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.064574 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-99qcn" podStartSLOduration=95.064568162 podStartE2EDuration="1m35.064568162s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.040685753 +0000 UTC m=+157.768076030" watchObservedRunningTime="2026-03-16 15:15:56.064568162 +0000 UTC m=+157.791958459" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.106810 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-m74n7" podStartSLOduration=95.106785211 podStartE2EDuration="1m35.106785211s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.106698529 +0000 UTC m=+157.834088856" watchObservedRunningTime="2026-03-16 15:15:56.106785211 +0000 UTC m=+157.834175508" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.106957 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podStartSLOduration=95.106951576 podStartE2EDuration="1m35.106951576s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.086372594 +0000 UTC m=+157.813762901" watchObservedRunningTime="2026-03-16 15:15:56.106951576 +0000 UTC 
m=+157.834341873" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.130890 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=11.130869575 podStartE2EDuration="11.130869575s" podCreationTimestamp="2026-03-16 15:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.12994317 +0000 UTC m=+157.857333457" watchObservedRunningTime="2026-03-16 15:15:56.130869575 +0000 UTC m=+157.858259862" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.154370 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=23.154343223 podStartE2EDuration="23.154343223s" podCreationTimestamp="2026-03-16 15:15:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.154004045 +0000 UTC m=+157.881394332" watchObservedRunningTime="2026-03-16 15:15:56.154343223 +0000 UTC m=+157.881733540" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.170375 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.170352081 podStartE2EDuration="1m12.170352081s" podCreationTimestamp="2026-03-16 15:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.16994861 +0000 UTC m=+157.897338897" watchObservedRunningTime="2026-03-16 15:15:56.170352081 +0000 UTC m=+157.897742398" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.262652 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.26263398 podStartE2EDuration="5.26263398s" podCreationTimestamp="2026-03-16 15:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.262546987 +0000 UTC m=+157.989937274" watchObservedRunningTime="2026-03-16 15:15:56.26263398 +0000 UTC m=+157.990024267" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.290880 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.290859285 podStartE2EDuration="1m8.290859285s" podCreationTimestamp="2026-03-16 15:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.29031439 +0000 UTC m=+158.017704717" watchObservedRunningTime="2026-03-16 15:15:56.290859285 +0000 UTC m=+158.018249572" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.888211 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/0.log" Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.888271 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerStarted","Data":"a6bfb6a3231ab025727278c14ea32655f87819defe1123015b98b189953c091e"} Mar 16 15:15:56 crc kubenswrapper[4736]: I0316 15:15:56.949631 4736 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-image-registry/node-ca-nq8cg" podStartSLOduration=95.949605237 podStartE2EDuration="1m35.949605237s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:56.9489915 +0000 UTC m=+158.676381807" watchObservedRunningTime="2026-03-16 15:15:56.949605237 +0000 UTC m=+158.676995534" Mar 16 15:15:57 crc kubenswrapper[4736]: I0316 15:15:57.977063 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:57 crc kubenswrapper[4736]: I0316 15:15:57.977252 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:57 crc kubenswrapper[4736]: I0316 15:15:57.977340 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:57 crc kubenswrapper[4736]: I0316 15:15:57.977392 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:57 crc kubenswrapper[4736]: E0316 15:15:57.977408 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:57 crc kubenswrapper[4736]: E0316 15:15:57.977517 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:57 crc kubenswrapper[4736]: E0316 15:15:57.977714 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:15:57 crc kubenswrapper[4736]: E0316 15:15:57.977863 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.529854 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.529901 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.529912 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.529928 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.529939 4736 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-16T15:15:58Z","lastTransitionTime":"2026-03-16T15:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.578261 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-zgcj2" podStartSLOduration=97.578235094 podStartE2EDuration="1m37.578235094s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:57.058351375 +0000 UTC m=+158.785741662" watchObservedRunningTime="2026-03-16 15:15:58.578235094 +0000 UTC m=+160.305625381" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.579333 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4"] Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.579752 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.584185 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.585449 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.586016 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.587767 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.626924 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.627076 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.627277 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.627438 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.627513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.728909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 
15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.728979 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.729046 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.729145 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.729145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.729279 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.729422 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.730809 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-service-ca\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.739491 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.763816 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/2e93d779-2c7b-4bbe-af37-6fd40a1689e2-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-299s4\" (UID: \"2e93d779-2c7b-4bbe-af37-6fd40a1689e2\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: I0316 15:15:58.896012 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" Mar 16 15:15:58 crc kubenswrapper[4736]: W0316 15:15:58.913778 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e93d779_2c7b_4bbe_af37_6fd40a1689e2.slice/crio-f3633d9733c452439c8b5b4fefd5c00a73bc1760658e0b7a96b2597a12cd6ba9 WatchSource:0}: Error finding container f3633d9733c452439c8b5b4fefd5c00a73bc1760658e0b7a96b2597a12cd6ba9: Status 404 returned error can't find the container with id f3633d9733c452439c8b5b4fefd5c00a73bc1760658e0b7a96b2597a12cd6ba9 Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.001268 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.015440 4736 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 16 15:15:59 crc kubenswrapper[4736]: E0316 15:15:59.091925 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.903169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" event={"ID":"2e93d779-2c7b-4bbe-af37-6fd40a1689e2","Type":"ContainerStarted","Data":"e01e7450fd54a9403cf95610a1c65261fa30f462b5d158a33dea07cdec7b1a84"} Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.903660 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" event={"ID":"2e93d779-2c7b-4bbe-af37-6fd40a1689e2","Type":"ContainerStarted","Data":"f3633d9733c452439c8b5b4fefd5c00a73bc1760658e0b7a96b2597a12cd6ba9"} Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.925416 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-299s4" podStartSLOduration=98.925380801 podStartE2EDuration="1m38.925380801s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:15:59.925173775 +0000 UTC m=+161.652564102" watchObservedRunningTime="2026-03-16 15:15:59.925380801 +0000 UTC m=+161.652771128" Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.977442 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:15:59 crc kubenswrapper[4736]: E0316 15:15:59.977596 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.977841 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:15:59 crc kubenswrapper[4736]: E0316 15:15:59.977912 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.978124 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:15:59 crc kubenswrapper[4736]: E0316 15:15:59.978196 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:15:59 crc kubenswrapper[4736]: I0316 15:15:59.978337 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:15:59 crc kubenswrapper[4736]: E0316 15:15:59.978415 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:16:00 crc kubenswrapper[4736]: I0316 15:16:00.978621 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.912851 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.916229 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerStarted","Data":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.917293 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.977829 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:01 crc kubenswrapper[4736]: E0316 15:16:01.977999 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.978278 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:01 crc kubenswrapper[4736]: E0316 15:16:01.978380 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.978534 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:01 crc kubenswrapper[4736]: E0316 15:16:01.978604 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:16:01 crc kubenswrapper[4736]: I0316 15:16:01.978783 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:01 crc kubenswrapper[4736]: E0316 15:16:01.978856 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:16:02 crc kubenswrapper[4736]: I0316 15:16:02.219394 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podStartSLOduration=101.219361067 podStartE2EDuration="1m41.219361067s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:01.956337921 +0000 UTC m=+163.683728218" watchObservedRunningTime="2026-03-16 15:16:02.219361067 +0000 UTC m=+163.946751374" Mar 16 15:16:02 crc kubenswrapper[4736]: I0316 15:16:02.221177 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-smqd4"] Mar 16 15:16:02 crc kubenswrapper[4736]: I0316 15:16:02.918789 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:02 crc kubenswrapper[4736]: E0316 15:16:02.918922 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:16:03 crc kubenswrapper[4736]: I0316 15:16:03.977976 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:03 crc kubenswrapper[4736]: I0316 15:16:03.977976 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:03 crc kubenswrapper[4736]: E0316 15:16:03.978137 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:16:03 crc kubenswrapper[4736]: E0316 15:16:03.978258 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:16:03 crc kubenswrapper[4736]: I0316 15:16:03.979570 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:03 crc kubenswrapper[4736]: E0316 15:16:03.979957 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:16:04 crc kubenswrapper[4736]: E0316 15:16:04.094334 4736 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 16 15:16:04 crc kubenswrapper[4736]: I0316 15:16:04.977883 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:04 crc kubenswrapper[4736]: E0316 15:16:04.978198 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:16:05 crc kubenswrapper[4736]: I0316 15:16:05.977378 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:05 crc kubenswrapper[4736]: I0316 15:16:05.977474 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:05 crc kubenswrapper[4736]: I0316 15:16:05.977518 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:05 crc kubenswrapper[4736]: E0316 15:16:05.977610 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:16:05 crc kubenswrapper[4736]: E0316 15:16:05.977796 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:16:05 crc kubenswrapper[4736]: E0316 15:16:05.978043 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:16:06 crc kubenswrapper[4736]: I0316 15:16:06.977939 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:06 crc kubenswrapper[4736]: E0316 15:16:06.978169 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:16:07 crc kubenswrapper[4736]: I0316 15:16:07.977519 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:07 crc kubenswrapper[4736]: I0316 15:16:07.977556 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:07 crc kubenswrapper[4736]: I0316 15:16:07.977657 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:07 crc kubenswrapper[4736]: E0316 15:16:07.977773 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 16 15:16:07 crc kubenswrapper[4736]: E0316 15:16:07.977944 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 16 15:16:07 crc kubenswrapper[4736]: E0316 15:16:07.978180 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 16 15:16:08 crc kubenswrapper[4736]: I0316 15:16:08.872412 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:16:08 crc kubenswrapper[4736]: I0316 15:16:08.978166 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:08 crc kubenswrapper[4736]: E0316 15:16:08.979973 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-smqd4" podUID="5586cec2-8615-4d8b-a695-65ee04613c35" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.977569 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.977564 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.977604 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.982052 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.982274 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.982714 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 16 15:16:09 crc kubenswrapper[4736]: I0316 15:16:09.985051 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 16 15:16:10 crc kubenswrapper[4736]: I0316 15:16:10.977341 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:10 crc kubenswrapper[4736]: I0316 15:16:10.981299 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 16 15:16:10 crc kubenswrapper[4736]: I0316 15:16:10.981873 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.832868 4736 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.879124 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qgcn7"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.879710 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.879926 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.880185 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.880396 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.881414 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.888309 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dswrb"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.888853 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5jx6b"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.889150 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.889542 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.889543 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.898746 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.899042 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.899193 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.899306 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.901275 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902158 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902321 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902445 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902605 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902753 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.902902 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.903022 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.903532 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.907362 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.907793 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.908064 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.908270 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.908859 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.909144 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.911984 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.912438 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.912646 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.912980 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.913127 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.913613 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.914145 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.914521 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.914524 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.943216 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.948088 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2j5dh"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.950697 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.952186 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.973465 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.973749 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974223 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974429 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974583 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974598 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974848 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974869 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.975019 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.975440 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.975744 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2bzh5"] Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976222 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974645 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976483 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974772 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.974818 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.975181 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.975215 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976560 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976644 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976669 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976750 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976834 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976844 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976943 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.976965 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977050 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977149 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977206 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977231 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977306 4736 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977378 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977452 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.977567 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.978232 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.978453 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.978636 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.980952 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981152 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981286 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981376 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981510 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981571 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981653 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.981934 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.982035 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.982231 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.978233 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.982661 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.982800 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.982883 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.988787 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.989168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990050 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990066 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990263 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990619 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990738 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990795 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 16 15:16:18 crc kubenswrapper[4736]: I0316 15:16:18.990863 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.000147 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.000581 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.002967 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbr9b\" (UniqueName: \"kubernetes.io/projected/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-kube-api-access-jbr9b\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.003015 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-config\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.003040 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-serving-cert\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.003059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005785 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005832 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6c41b72-0beb-4da9-9717-f975a6608475-audit-dir\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005865 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-encryption-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005886 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-config\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005904 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005921 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-client\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005931 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.005942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006260 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006299 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-machine-approver-tls\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-encryption-config\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-image-import-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006423 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-serving-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006447 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.006478 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: 
\"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017040 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptmrm\" (UniqueName: \"kubernetes.io/projected/b6c41b72-0beb-4da9-9717-f975a6608475-kube-api-access-ptmrm\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017604 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm6p\" (UniqueName: \"kubernetes.io/projected/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-kube-api-access-nvm6p\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017636 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-audit-policies\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-client\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017705 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-service-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017729 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017767 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.017998 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dcrs\" (UniqueName: 
\"kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018055 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-audit-dir\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018087 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-serving-cert\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-images\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018345 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-audit\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018401 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018435 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77900183-2391-4a47-8468-b36847297446-serving-cert\") pod 
\"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018486 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x959c\" (UniqueName: \"kubernetes.io/projected/77900183-2391-4a47-8468-b36847297446-kube-api-access-x959c\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjw4\" (UniqueName: \"kubernetes.io/projected/1b7712f1-6d20-477f-a190-a50a6d35c238-kube-api-access-5xjw4\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018575 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-auth-proxy-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018643 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-node-pullsecrets\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018717 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwzp\" (UniqueName: \"kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.018755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.070896 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j58gg"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.071383 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.072159 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gf6z"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.072282 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.072424 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vv5d4"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.072768 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.073698 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.075600 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.077441 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.078458 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-df26x"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.078951 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.079281 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.079623 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081041 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081476 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081602 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081707 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081822 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081916 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qgcn7"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081941 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.081949 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.082085 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.082487 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.082920 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.083409 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.084011 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.084080 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.085094 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.085273 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.085933 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.085842 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.086461 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.086694 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.087938 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.088492 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.089082 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.089783 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094343 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094447 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094357 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094803 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094841 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.094940 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.095231 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.096908 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.097095 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.097204 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.097253 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.098034 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.100758 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.101434 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.102017 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ns4mr"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.102388 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kzg9q"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.102782 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.102938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.103480 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.105407 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.109450 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.118845 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119141 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119374 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-trusted-ca\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119418 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9c7f\" (UniqueName: \"kubernetes.io/projected/5881f71f-e94a-4bc8-8d08-8bf079fd10e3-kube-api-access-r9c7f\") pod \"downloads-7954f5f757-2bzh5\" (UID: \"5881f71f-e94a-4bc8-8d08-8bf079fd10e3\") " pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.120364 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jwf\" (UniqueName: \"kubernetes.io/projected/56799fb0-e11b-40fb-812c-bb7907d5b25f-kube-api-access-49jwf\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.120444 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-machine-approver-tls\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119948 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.120752 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.120774 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tbntp"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.121252 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.121825 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.122427 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.122432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.119992 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.123019 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.120463 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-encryption-config\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124281 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-image-import-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124315 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124341 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-serving-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124362 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124382 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124404 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124422 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptmrm\" (UniqueName: \"kubernetes.io/projected/b6c41b72-0beb-4da9-9717-f975a6608475-kube-api-access-ptmrm\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124441 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5w4\" (UniqueName: 
\"kubernetes.io/projected/f971a0f2-eb9c-4fe6-a2df-52267d16f41a-kube-api-access-zv5w4\") pod \"migrator-59844c95c7-g4ggq\" (UID: \"f971a0f2-eb9c-4fe6-a2df-52267d16f41a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124465 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvm6p\" (UniqueName: \"kubernetes.io/projected/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-kube-api-access-nvm6p\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124486 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-config\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-audit-policies\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124538 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-client\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-service-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124575 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124591 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " 
pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124608 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa766eb-c490-46d8-8154-13eb2719e6e0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124645 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcef29de-d881-4b9e-871e-6a2cc33484b6-serving-cert\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99mf\" (UniqueName: \"kubernetes.io/projected/2fa766eb-c490-46d8-8154-13eb2719e6e0-kube-api-access-s99mf\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124704 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dcrs\" (UniqueName: \"kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124720 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa766eb-c490-46d8-8154-13eb2719e6e0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124740 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-audit-dir\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124757 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124776 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-serving-cert\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124813 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-images\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124844 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-audit\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124863 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77900183-2391-4a47-8468-b36847297446-serving-cert\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124894 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/99f1c1c6-cc63-4104-afb0-ff540cf588a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x959c\" (UniqueName: \"kubernetes.io/projected/77900183-2391-4a47-8468-b36847297446-kube-api-access-x959c\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124927 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd6lg\" (UniqueName: \"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124955 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjw4\" (UniqueName: \"kubernetes.io/projected/1b7712f1-6d20-477f-a190-a50a6d35c238-kube-api-access-5xjw4\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124973 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-auth-proxy-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.124988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbg4r\" (UniqueName: \"kubernetes.io/projected/fcef29de-d881-4b9e-871e-6a2cc33484b6-kube-api-access-fbg4r\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125013 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-node-pullsecrets\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125038 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgwzp\" (UniqueName: \"kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125052 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125094 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125126 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbr9b\" (UniqueName: \"kubernetes.io/projected/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-kube-api-access-jbr9b\") pod 
\"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125147 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-config\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125192 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-serving-cert\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125209 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56799fb0-e11b-40fb-812c-bb7907d5b25f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125246 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6c41b72-0beb-4da9-9717-f975a6608475-audit-dir\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125287 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzwbn\" (UniqueName: \"kubernetes.io/projected/99f1c1c6-cc63-4104-afb0-ff540cf588a9-kube-api-access-lzwbn\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:19 crc 
kubenswrapper[4736]: I0316 15:16:19.125320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-encryption-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125340 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125363 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-config\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125539 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.125897 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.126159 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.127766 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-image-import-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.127846 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.127891 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-client\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.127918 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56799fb0-e11b-40fb-812c-bb7907d5b25f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.130874 4736 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.132183 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-images\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.132697 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.133188 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-auth-proxy-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.133305 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-node-pullsecrets\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.133307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.134767 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.135341 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.135794 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-audit-policies\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.136337 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c41b72-0beb-4da9-9717-f975a6608475-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.136996 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-config\") pod 
\"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.137142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.149933 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-serving-ca\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.152995 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-encryption-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.153160 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-config\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.153371 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-encryption-config\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.157182 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561236-cxwv6"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.158196 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.159119 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-audit\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.161091 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6c41b72-0beb-4da9-9717-f975a6608475-audit-dir\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.161644 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-etcd-client\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.166554 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-serving-cert\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.166554 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.166966 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.168235 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-service-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.168319 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7712f1-6d20-477f-a190-a50a6d35c238-config\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.169150 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.170285 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/77900183-2391-4a47-8468-b36847297446-serving-cert\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.170716 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b7712f1-6d20-477f-a190-a50a6d35c238-audit-dir\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.171724 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.172965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77900183-2391-4a47-8468-b36847297446-config\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.172976 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.173178 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-machine-approver-tls\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.173332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.173717 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7712f1-6d20-477f-a190-a50a6d35c238-serving-cert\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.177654 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.179878 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.180532 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 16 15:16:19 crc 
kubenswrapper[4736]: I0316 15:16:19.181397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6c41b72-0beb-4da9-9717-f975a6608475-etcd-client\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.181472 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.182533 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.186827 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.187045 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.187887 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bx5tm"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.188631 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.189493 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.189577 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.192791 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2j5dh"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.196186 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qv77g"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.197945 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.198060 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.199165 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vv5d4"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.201663 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561236-cxwv6"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.203322 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dswrb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.203759 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.207032 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gf6z"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.209183 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.211457 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.213094 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.215246 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.216567 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j58gg"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.218120 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.218984 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.220091 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.222971 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.223216 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.224331 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.225929 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.227387 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ngjsb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.228703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv5w4\" (UniqueName: \"kubernetes.io/projected/f971a0f2-eb9c-4fe6-a2df-52267d16f41a-kube-api-access-zv5w4\") pod \"migrator-59844c95c7-g4ggq\" (UID: \"f971a0f2-eb9c-4fe6-a2df-52267d16f41a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.228779 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.228821 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-config\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.228855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.229151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcef29de-d881-4b9e-871e-6a2cc33484b6-serving-cert\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.229192 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa766eb-c490-46d8-8154-13eb2719e6e0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.229616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99mf\" (UniqueName: \"kubernetes.io/projected/2fa766eb-c490-46d8-8154-13eb2719e6e0-kube-api-access-s99mf\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.229662 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa766eb-c490-46d8-8154-13eb2719e6e0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: 
\"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fa766eb-c490-46d8-8154-13eb2719e6e0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230046 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230048 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/99f1c1c6-cc63-4104-afb0-ff540cf588a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230101 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd6lg\" (UniqueName: \"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230149 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbg4r\" (UniqueName: \"kubernetes.io/projected/fcef29de-d881-4b9e-871e-6a2cc33484b6-kube-api-access-fbg4r\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230193 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230220 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56799fb0-e11b-40fb-812c-bb7907d5b25f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230261 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lzwbn\" (UniqueName: \"kubernetes.io/projected/99f1c1c6-cc63-4104-afb0-ff540cf588a9-kube-api-access-lzwbn\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230278 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230309 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56799fb0-e11b-40fb-812c-bb7907d5b25f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230332 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-trusted-ca\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230359 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jwf\" (UniqueName: \"kubernetes.io/projected/56799fb0-e11b-40fb-812c-bb7907d5b25f-kube-api-access-49jwf\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c7f\" (UniqueName: \"kubernetes.io/projected/5881f71f-e94a-4bc8-8d08-8bf079fd10e3-kube-api-access-r9c7f\") pod \"downloads-7954f5f757-2bzh5\" (UID: \"5881f71f-e94a-4bc8-8d08-8bf079fd10e3\") " pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.230410 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.232401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.232551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: 
\"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.232587 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56799fb0-e11b-40fb-812c-bb7907d5b25f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.233321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.233397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.233817 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-trusted-ca\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.233998 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef29de-d881-4b9e-871e-6a2cc33484b6-config\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.234256 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.235571 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p47pv"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.234657 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcef29de-d881-4b9e-871e-6a2cc33484b6-serving-cert\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.234437 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.235761 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/99f1c1c6-cc63-4104-afb0-ff540cf588a9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.235506 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56799fb0-e11b-40fb-812c-bb7907d5b25f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.237413 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2fa766eb-c490-46d8-8154-13eb2719e6e0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.237989 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.238739 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-df26x"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.238940 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.241304 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.242923 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.244468 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bzh5"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.244846 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.245903 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5jx6b"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.247352 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.248841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.251776 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kzg9q"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.253656 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ngjsb"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.255388 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.257241 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.257677 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.258931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.260435 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p47pv"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.263030 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.266146 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qv77g"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.267347 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.269159 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 
15:16:19.269597 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tbntp"] Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.283215 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.304180 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.328229 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.342621 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.363333 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.382796 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.402691 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.422879 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.442575 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.462951 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.493028 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.508907 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.522736 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.546141 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.563164 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.582266 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.604069 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.622214 4736 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-authentication"/"v4-0-config-system-session" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.642588 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.663068 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.684168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.703330 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.722700 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.742637 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.770033 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.783417 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.803469 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.822803 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.842725 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.862739 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.885061 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.903391 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.923307 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.944404 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.968900 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 16 15:16:19 crc kubenswrapper[4736]: I0316 15:16:19.983181 4736 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.004582 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.023414 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.043982 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.063712 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.083554 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.106217 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.121179 4736 request.go:700] Waited for 1.018112255s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&limit=500&resourceVersion=0 Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.123183 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.144136 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.162971 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.223090 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.243510 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.262636 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.282572 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.303588 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.323442 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.342047 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.363055 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 
15:16:20.390549 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.402117 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.423984 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.442257 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.462712 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.483030 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.504031 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.522246 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.542084 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.563672 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.582722 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.602229 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.638007 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgwzp\" (UniqueName: \"kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp\") pod \"controller-manager-879f6c89f-4dzg6\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.656286 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbr9b\" (UniqueName: \"kubernetes.io/projected/aee91c3b-8c99-4023-a891-2aaa3ab5ebcc-kube-api-access-jbr9b\") pod \"machine-api-operator-5694c8668f-dswrb\" (UID: \"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.679600 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjw4\" (UniqueName: \"kubernetes.io/projected/1b7712f1-6d20-477f-a190-a50a6d35c238-kube-api-access-5xjw4\") pod \"apiserver-76f77b778f-qgcn7\" (UID: \"1b7712f1-6d20-477f-a190-a50a6d35c238\") " pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:20 crc kubenswrapper[4736]: 
I0316 15:16:20.701340 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptmrm\" (UniqueName: \"kubernetes.io/projected/b6c41b72-0beb-4da9-9717-f975a6608475-kube-api-access-ptmrm\") pod \"apiserver-7bbb656c7d-wx2v9\" (UID: \"b6c41b72-0beb-4da9-9717-f975a6608475\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.703289 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.722943 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.745310 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.748455 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.763726 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.783718 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.800996 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.825035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvm6p\" (UniqueName: \"kubernetes.io/projected/e2850ca2-f7d9-4aa6-9163-e0b32c53cdce-kube-api-access-nvm6p\") pod \"machine-approver-56656f9798-rfgmf\" (UID: \"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.842526 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x959c\" (UniqueName: \"kubernetes.io/projected/77900183-2391-4a47-8468-b36847297446-kube-api-access-x959c\") pod \"authentication-operator-69f744f599-5jx6b\" (UID: \"77900183-2391-4a47-8468-b36847297446\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.843143 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.864536 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.871398 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.887411 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.904361 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.909622 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.917450 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dcrs\" (UniqueName: \"kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs\") pod \"route-controller-manager-6576b87f9c-cpv4c\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.918130 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.924371 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.927847 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.944582 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.952620 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qgcn7"] Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.964357 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.983495 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 16 15:16:20 crc kubenswrapper[4736]: I0316 15:16:20.991114 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9"] Mar 16 15:16:20 crc kubenswrapper[4736]: W0316 15:16:20.996007 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2850ca2_f7d9_4aa6_9163_e0b32c53cdce.slice/crio-2dde38b74029fc65e97f1d4a685e30c8d5c7fa66a24ccab04412ae8a9e6871e9 WatchSource:0}: Error finding container 2dde38b74029fc65e97f1d4a685e30c8d5c7fa66a24ccab04412ae8a9e6871e9: Status 404 returned error can't find the container with id 2dde38b74029fc65e97f1d4a685e30c8d5c7fa66a24ccab04412ae8a9e6871e9 Mar 16 15:16:21 crc kubenswrapper[4736]: W0316 15:16:21.000580 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7712f1_6d20_477f_a190_a50a6d35c238.slice/crio-b771310df54d771a9665a8f8ee3d82824013088af75c1df22544be37ae301ff6 WatchSource:0}: Error finding container b771310df54d771a9665a8f8ee3d82824013088af75c1df22544be37ae301ff6: Status 404 returned error can't find the container with id b771310df54d771a9665a8f8ee3d82824013088af75c1df22544be37ae301ff6 Mar 16 15:16:21 crc kubenswrapper[4736]: W0316 15:16:21.001295 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6c41b72_0beb_4da9_9717_f975a6608475.slice/crio-68bb307d1b5d255c3ed702ff558ad2e5eeb4c95cfd084dfc5b64c6fd2907804a WatchSource:0}: Error finding container 68bb307d1b5d255c3ed702ff558ad2e5eeb4c95cfd084dfc5b64c6fd2907804a: Status 404 returned error can't find the container with id 68bb307d1b5d255c3ed702ff558ad2e5eeb4c95cfd084dfc5b64c6fd2907804a Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.004908 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.024681 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.043352 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.063650 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.105814 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv5w4\" (UniqueName: \"kubernetes.io/projected/f971a0f2-eb9c-4fe6-a2df-52267d16f41a-kube-api-access-zv5w4\") pod \"migrator-59844c95c7-g4ggq\" (UID: \"f971a0f2-eb9c-4fe6-a2df-52267d16f41a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.122817 4736 request.go:700] Waited for 1.890133373s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.129520 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.131400 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzwbn\" (UniqueName: \"kubernetes.io/projected/99f1c1c6-cc63-4104-afb0-ff540cf588a9-kube-api-access-lzwbn\") pod \"cluster-samples-operator-665b6dd947-t9m65\" (UID: \"99f1c1c6-cc63-4104-afb0-ff540cf588a9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.146584 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd6lg\" (UniqueName: \"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg\") pod \"console-f9d7485db-p78nr\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.157633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99mf\" (UniqueName: \"kubernetes.io/projected/2fa766eb-c490-46d8-8154-13eb2719e6e0-kube-api-access-s99mf\") pod \"openshift-controller-manager-operator-756b6f6bc6-grkd5\" (UID: \"2fa766eb-c490-46d8-8154-13eb2719e6e0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.160589 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.179279 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbg4r\" (UniqueName: \"kubernetes.io/projected/fcef29de-d881-4b9e-871e-6a2cc33484b6-kube-api-access-fbg4r\") pod \"console-operator-58897d9998-2j5dh\" (UID: \"fcef29de-d881-4b9e-871e-6a2cc33484b6\") " pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.204168 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49jwf\" (UniqueName: \"kubernetes.io/projected/56799fb0-e11b-40fb-812c-bb7907d5b25f-kube-api-access-49jwf\") pod \"openshift-apiserver-operator-796bbdcf4f-4vrcz\" (UID: \"56799fb0-e11b-40fb-812c-bb7907d5b25f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.208234 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dswrb"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.226688 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.227198 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c7f\" (UniqueName: \"kubernetes.io/projected/5881f71f-e94a-4bc8-8d08-8bf079fd10e3-kube-api-access-r9c7f\") pod \"downloads-7954f5f757-2bzh5\" (UID: \"5881f71f-e94a-4bc8-8d08-8bf079fd10e3\") " pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.243604 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.263775 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.285697 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5jx6b"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.285809 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.296728 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.296886 4736 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.301092 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.303234 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.307358 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.318753 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.323934 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.327394 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.343731 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.400721 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462313 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-service-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462361 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ce839b1-2853-4984-9105-73e482b62cfb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462430 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462453 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26014393-61ac-493d-b8b5-c71abee3a415-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462500 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9m7c\" (UniqueName: 
\"kubernetes.io/projected/a3081e47-19ee-40fa-b5db-9c01bc56228a-kube-api-access-m9m7c\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462526 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462557 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlzv\" (UniqueName: \"kubernetes.io/projected/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-kube-api-access-lnlzv\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462582 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d36700-672c-441c-bf3a-23f4ef140fd5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462616 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462634 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-metrics-tls\") pod \"dns-operator-744455d44c-vv5d4\" (UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462676 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462699 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462714 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-cabundle\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462740 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462756 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-serving-cert\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462825 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjm6\" (UniqueName: \"kubernetes.io/projected/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-kube-api-access-spjm6\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462841 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462882 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462904 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d36700-672c-441c-bf3a-23f4ef140fd5-config\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462932 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d36700-672c-441c-bf3a-23f4ef140fd5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462950 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462976 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-service-ca-bundle\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.462991 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463008 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszdn\" (UniqueName: \"kubernetes.io/projected/26014393-61ac-493d-b8b5-c71abee3a415-kube-api-access-nszdn\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463056 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10773cef-e8d2-43c6-a821-901ab3ebd72f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463079 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-proxy-tls\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtrl\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-kube-api-access-ljtrl\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-proxy-tls\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463210 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463229 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-metrics-certs\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463246 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463317 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlhc6\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-kube-api-access-jlhc6\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: 
\"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463384 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463401 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-default-certificate\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463420 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-config\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463440 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463457 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477lq\" (UniqueName: \"kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463475 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce839b1-2853-4984-9105-73e482b62cfb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463494 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bfbf1c26-496a-4fc3-a248-9c2db09bf334-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463511 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dca4fa92-819d-4973-87b1-b6282946f072-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463546 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463565 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnj7z\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463581 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce839b1-2853-4984-9105-73e482b62cfb-config\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-client\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463664 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqzd2\" (UniqueName: \"kubernetes.io/projected/dca4fa92-819d-4973-87b1-b6282946f072-kube-api-access-lqzd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463685 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-images\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463704 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463734 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ab67d8-7624-44b8-8891-37490ba9ab4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463751 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463776 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdlq\" (UniqueName: \"kubernetes.io/projected/bfbf1c26-496a-4fc3-a248-9c2db09bf334-kube-api-access-tbdlq\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463793 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463827 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10773cef-e8d2-43c6-a821-901ab3ebd72f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463848 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbf1c26-496a-4fc3-a248-9c2db09bf334-serving-cert\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmcsl\" (UniqueName: \"kubernetes.io/projected/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-kube-api-access-hmcsl\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463886 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvmk\" (UniqueName: \"kubernetes.io/projected/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-kube-api-access-xqvmk\") pod \"dns-operator-744455d44c-vv5d4\" (UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26014393-61ac-493d-b8b5-c71abee3a415-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463959 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-key\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.463989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61ab67d8-7624-44b8-8891-37490ba9ab4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: E0316 15:16:21.468644 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:21.968620551 +0000 UTC m=+183.696010838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.468783 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10773cef-e8d2-43c6-a821-901ab3ebd72f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.469939 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-stats-auth\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.469971 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.470017 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.470315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7dk\" (UniqueName: \"kubernetes.io/projected/365f3914-692f-4b54-b389-89e9a276a9d9-kube-api-access-sk7dk\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574806 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574838 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnj7z\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574856 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce839b1-2853-4984-9105-73e482b62cfb-config\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574878 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4c57\" (UniqueName: \"kubernetes.io/projected/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-kube-api-access-c4c57\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574896 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574912 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-node-bootstrap-token\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574929 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-client\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574952 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.574971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-images\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575004 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqzd2\" (UniqueName: 
\"kubernetes.io/projected/dca4fa92-819d-4973-87b1-b6282946f072-kube-api-access-lqzd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575020 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ab67d8-7624-44b8-8891-37490ba9ab4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575039 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575057 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgjv\" (UniqueName: \"kubernetes.io/projected/6d00785d-6730-42d9-8004-f1bbc451d581-kube-api-access-sdgjv\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575071 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14c47f9-54bf-455c-9e43-a36fb2fd871b-cert\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575086 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575157 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-mountpoint-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575174 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbdlq\" (UniqueName: \"kubernetes.io/projected/bfbf1c26-496a-4fc3-a248-9c2db09bf334-kube-api-access-tbdlq\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10773cef-e8d2-43c6-a821-901ab3ebd72f-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575210 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbf1c26-496a-4fc3-a248-9c2db09bf334-serving-cert\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575227 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575245 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-serving-cert\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvxx8\" (UniqueName: \"kubernetes.io/projected/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-kube-api-access-xvxx8\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvmk\" (UniqueName: \"kubernetes.io/projected/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-kube-api-access-xqvmk\") pod \"dns-operator-744455d44c-vv5d4\" (UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575304 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prp65\" (UniqueName: \"kubernetes.io/projected/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-kube-api-access-prp65\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575323 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmcsl\" (UniqueName: \"kubernetes.io/projected/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-kube-api-access-hmcsl\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575342 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26014393-61ac-493d-b8b5-c71abee3a415-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzlh\" (UniqueName: \"kubernetes.io/projected/f8234512-ffa3-4229-b3d5-be9360b59dac-kube-api-access-vgzlh\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575378 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-key\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575412 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61ab67d8-7624-44b8-8891-37490ba9ab4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnrm\" (UniqueName: \"kubernetes.io/projected/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-kube-api-access-fwnrm\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575443 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-certs\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575460 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575477 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48t4\" (UniqueName: \"kubernetes.io/projected/9ad17779-e378-4bfe-bf98-b746baf0d153-kube-api-access-f48t4\") pod 
\"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d00785d-6730-42d9-8004-f1bbc451d581-tmpfs\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10773cef-e8d2-43c6-a821-901ab3ebd72f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575730 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-stats-auth\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575763 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7dk\" (UniqueName: \"kubernetes.io/projected/365f3914-692f-4b54-b389-89e9a276a9d9-kube-api-access-sk7dk\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575797 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-registration-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575815 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-service-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc 
kubenswrapper[4736]: I0316 15:16:21.575853 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8jm\" (UniqueName: \"kubernetes.io/projected/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-kube-api-access-bw8jm\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ce839b1-2853-4984-9105-73e482b62cfb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575945 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26014393-61ac-493d-b8b5-c71abee3a415-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.575982 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9m7c\" (UniqueName: \"kubernetes.io/projected/a3081e47-19ee-40fa-b5db-9c01bc56228a-kube-api-access-m9m7c\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576000 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zbw\" (UniqueName: \"kubernetes.io/projected/90e2fab8-e507-485e-9c3d-63d00f092fa1-kube-api-access-z5zbw\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576040 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnlzv\" (UniqueName: \"kubernetes.io/projected/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-kube-api-access-lnlzv\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576057 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576072 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d36700-672c-441c-bf3a-23f4ef140fd5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576088 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576128 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-metrics-tls\") pod \"dns-operator-744455d44c-vv5d4\" (UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576143 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576161 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576200 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-serving-cert\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576230 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-cabundle\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576255 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576277 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576292 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6fjr\" (UniqueName: \"kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576308 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ad17779-e378-4bfe-bf98-b746baf0d153-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576330 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjqz\" (UniqueName: 
\"kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz\") pod \"auto-csr-approver-29561236-cxwv6\" (UID: \"ae9de77b-767a-4a87-b4bd-648728dc9826\") " pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.576361 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-srv-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579770 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjm6\" (UniqueName: \"kubernetes.io/projected/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-kube-api-access-spjm6\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579830 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579848 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579868 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drp9v\" (UniqueName: \"kubernetes.io/projected/a14c47f9-54bf-455c-9e43-a36fb2fd871b-kube-api-access-drp9v\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-config-volume\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579908 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d36700-672c-441c-bf3a-23f4ef140fd5-config\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d36700-672c-441c-bf3a-23f4ef140fd5-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579960 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-service-ca-bundle\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579979 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.579995 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580014 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-metrics-tls\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580032 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-config\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580048 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmhf\" (UniqueName: \"kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580072 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10773cef-e8d2-43c6-a821-901ab3ebd72f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580090 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nszdn\" (UniqueName: \"kubernetes.io/projected/26014393-61ac-493d-b8b5-c71abee3a415-kube-api-access-nszdn\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580122 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-plugins-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580141 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580164 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-proxy-tls\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580184 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580203 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljtrl\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-kube-api-access-ljtrl\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580221 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580239 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-proxy-tls\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: 
\"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580278 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlhc6\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-kube-api-access-jlhc6\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-metrics-certs\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580309 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580326 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-socket-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-srv-cert\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580393 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580409 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-default-certificate\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc 
kubenswrapper[4736]: I0316 15:16:21.580425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-csi-data-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-config\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580465 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce839b1-2853-4984-9105-73e482b62cfb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580482 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580499 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-477lq\" (UniqueName: \"kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.580517 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bfbf1c26-496a-4fc3-a248-9c2db09bf334-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.584889 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.586283 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.589800 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.603012 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dca4fa92-819d-4973-87b1-b6282946f072-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.603087 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-webhook-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: E0316 15:16:21.603444 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.103301854 +0000 UTC m=+183.830692141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.605721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.609716 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61ab67d8-7624-44b8-8891-37490ba9ab4b-trusted-ca\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.610283 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce839b1-2853-4984-9105-73e482b62cfb-config\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.610929 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10773cef-e8d2-43c6-a821-901ab3ebd72f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.612826 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.616436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-proxy-tls\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.617603 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.617875 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-proxy-tls\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.620466 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.624258 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-default-certificate\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.624716 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-config\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.628866 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.629269 4736 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-lnlzv\" (UniqueName: \"kubernetes.io/projected/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-kube-api-access-lnlzv\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.633856 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-key\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.635363 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-serving-cert\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.636169 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-metrics-certs\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.636448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-stats-auth\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.639467 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10773cef-e8d2-43c6-a821-901ab3ebd72f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.639824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-service-ca\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.645703 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.645813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61ab67d8-7624-44b8-8891-37490ba9ab4b-metrics-tls\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 
15:16:21.648892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bfbf1c26-496a-4fc3-a248-9c2db09bf334-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.649323 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.652947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.653361 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a3081e47-19ee-40fa-b5db-9c01bc56228a-signing-cabundle\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.654448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26014393-61ac-493d-b8b5-c71abee3a415-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.655013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.655382 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.657282 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/365f3914-692f-4b54-b389-89e9a276a9d9-etcd-client\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.657676 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-images\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.658710 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26014393-61ac-493d-b8b5-c71abee3a415-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.659971 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.659975 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.660207 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce839b1-2853-4984-9105-73e482b62cfb-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.660836 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlhc6\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-kube-api-access-jlhc6\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.661390 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.661402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.661883 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.662861 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3d36700-672c-441c-bf3a-23f4ef140fd5-config\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.664099 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.664930 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3d36700-672c-441c-bf3a-23f4ef140fd5-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.666715 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.666936 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/584c55a3-9d43-42ab-9fcd-b3a938b52dc1-service-ca-bundle\") pod \"router-default-5444994796-ns4mr\" (UID: \"584c55a3-9d43-42ab-9fcd-b3a938b52dc1\") " pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.667348 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfbf1c26-496a-4fc3-a248-9c2db09bf334-serving-cert\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.667815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.669299 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-metrics-tls\") pod \"dns-operator-744455d44c-vv5d4\" 
(UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.669681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3d36700-672c-441c-bf3a-23f4ef140fd5-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6d989\" (UID: \"a3d36700-672c-441c-bf3a-23f4ef140fd5\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.672537 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.677404 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.680574 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nszdn\" (UniqueName: \"kubernetes.io/projected/26014393-61ac-493d-b8b5-c71abee3a415-kube-api-access-nszdn\") pod \"kube-storage-version-migrator-operator-b67b599dd-qwc6v\" (UID: \"26014393-61ac-493d-b8b5-c71abee3a415\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.695280 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.696884 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnj7z\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.699158 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.711041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.717978 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dca4fa92-819d-4973-87b1-b6282946f072-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721410 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnrm\" (UniqueName: \"kubernetes.io/projected/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-kube-api-access-fwnrm\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721439 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-certs\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721460 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721478 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48t4\" (UniqueName: \"kubernetes.io/projected/9ad17779-e378-4bfe-bf98-b746baf0d153-kube-api-access-f48t4\") pod \"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d00785d-6730-42d9-8004-f1bbc451d581-tmpfs\") pod 
\"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721533 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-registration-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721553 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8jm\" (UniqueName: \"kubernetes.io/projected/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-kube-api-access-bw8jm\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721579 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721599 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5zbw\" (UniqueName: \"kubernetes.io/projected/90e2fab8-e507-485e-9c3d-63d00f092fa1-kube-api-access-z5zbw\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721655 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721693 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721712 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721732 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6fjr\" (UniqueName: \"kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721750 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ad17779-e378-4bfe-bf98-b746baf0d153-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721768 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjqz\" (UniqueName: \"kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz\") pod \"auto-csr-approver-29561236-cxwv6\" (UID: \"ae9de77b-767a-4a87-b4bd-648728dc9826\") " pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721784 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-srv-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721809 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drp9v\" (UniqueName: \"kubernetes.io/projected/a14c47f9-54bf-455c-9e43-a36fb2fd871b-kube-api-access-drp9v\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721824 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-config-volume\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-metrics-tls\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-config\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721893 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmhf\" (UniqueName: 
\"kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-plugins-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721975 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721970 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-registration-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.721994 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-socket-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.722018 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-srv-cert\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.722217 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-socket-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: E0316 15:16:21.722508 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.222490502 +0000 UTC m=+183.949880789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.724010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-csi-data-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.733348 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.743656 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-plugins-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.746257 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-apiservice-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.746786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10773cef-e8d2-43c6-a821-901ab3ebd72f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5rwsm\" (UID: \"10773cef-e8d2-43c6-a821-901ab3ebd72f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.748696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-config-volume\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.748810 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d00785d-6730-42d9-8004-f1bbc451d581-tmpfs\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.750402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-srv-cert\") pod 
\"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.750926 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-config\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.751330 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.751930 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8234512-ffa3-4229-b3d5-be9360b59dac-profile-collector-cert\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.752838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.753813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.761231 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.761756 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.774703 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-metrics-tls\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.786901 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-srv-cert\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.789126 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.789410 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-certs\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.790454 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2bzh5"] Mar 16 15:16:21 crc kubenswrapper[4736]: W0316 15:16:21.790604 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56799fb0_e11b_40fb_812c_bb7907d5b25f.slice/crio-b1e44e41452ceb6b5528f02c937eddbce1565e3507e33c99c3289f1a76279d25 WatchSource:0}: Error finding container b1e44e41452ceb6b5528f02c937eddbce1565e3507e33c99c3289f1a76279d25: Status 404 returned error can't find the container with id b1e44e41452ceb6b5528f02c937eddbce1565e3507e33c99c3289f1a76279d25 Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.791038 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.797584 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-csi-data-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.797872 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.798557 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbdlq\" (UniqueName: \"kubernetes.io/projected/bfbf1c26-496a-4fc3-a248-9c2db09bf334-kube-api-access-tbdlq\") pod \"openshift-config-operator-7777fb866f-j58gg\" (UID: \"bfbf1c26-496a-4fc3-a248-9c2db09bf334\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.798565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-webhook-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802674 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4c57\" (UniqueName: \"kubernetes.io/projected/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-kube-api-access-c4c57\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802698 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802717 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-node-bootstrap-token\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802744 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdgjv\" (UniqueName: \"kubernetes.io/projected/6d00785d-6730-42d9-8004-f1bbc451d581-kube-api-access-sdgjv\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802760 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14c47f9-54bf-455c-9e43-a36fb2fd871b-cert\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802778 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-mountpoint-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802800 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-serving-cert\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802816 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvxx8\" (UniqueName: \"kubernetes.io/projected/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-kube-api-access-xvxx8\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802839 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prp65\" (UniqueName: \"kubernetes.io/projected/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-kube-api-access-prp65\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.802862 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzlh\" (UniqueName: \"kubernetes.io/projected/f8234512-ffa3-4229-b3d5-be9360b59dac-kube-api-access-vgzlh\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.803685 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-mountpoint-dir\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.803915 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9ad17779-e378-4bfe-bf98-b746baf0d153-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.804619 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.805116 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d00785d-6730-42d9-8004-f1bbc451d581-webhook-cert\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.811945 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/90e2fab8-e507-485e-9c3d-63d00f092fa1-node-bootstrap-token\") pod \"machine-config-server-bx5tm\" (UID: 
\"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.812563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a14c47f9-54bf-455c-9e43-a36fb2fd871b-cert\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.812737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqzd2\" (UniqueName: \"kubernetes.io/projected/dca4fa92-819d-4973-87b1-b6282946f072-kube-api-access-lqzd2\") pod \"control-plane-machine-set-operator-78cbb6b69f-zxgkb\" (UID: \"dca4fa92-819d-4973-87b1-b6282946f072\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.825696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4cb215d7-e71c-4a0b-9817-49a4f8c7f60e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rmd28\" (UID: \"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.829494 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-serving-cert\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.841517 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtrl\" (UniqueName: \"kubernetes.io/projected/61ab67d8-7624-44b8-8891-37490ba9ab4b-kube-api-access-ljtrl\") pod \"ingress-operator-5b745b69d9-df26x\" (UID: \"61ab67d8-7624-44b8-8891-37490ba9ab4b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.858926 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7dk\" (UniqueName: \"kubernetes.io/projected/365f3914-692f-4b54-b389-89e9a276a9d9-kube-api-access-sk7dk\") pod \"etcd-operator-b45778765-7gf6z\" (UID: \"365f3914-692f-4b54-b389-89e9a276a9d9\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.888737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ce839b1-2853-4984-9105-73e482b62cfb-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pwfh6\" (UID: \"4ce839b1-2853-4984-9105-73e482b62cfb\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.910825 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9m7c\" (UniqueName: \"kubernetes.io/projected/a3081e47-19ee-40fa-b5db-9c01bc56228a-kube-api-access-m9m7c\") pod \"service-ca-9c57cc56f-kzg9q\" (UID: \"a3081e47-19ee-40fa-b5db-9c01bc56228a\") " pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.911996 4736 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:21 crc kubenswrapper[4736]: E0316 15:16:21.912197 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.412171636 +0000 UTC m=+184.139561923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.912573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:21 crc kubenswrapper[4736]: E0316 15:16:21.913737 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.413718898 +0000 UTC m=+184.141109185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.927203 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477lq\" (UniqueName: \"kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq\") pod \"oauth-openshift-558db77b4-76btc\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.941945 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjm6\" (UniqueName: \"kubernetes.io/projected/ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13-kube-api-access-spjm6\") pod \"machine-config-operator-74547568cd-prl2f\" (UID: \"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.944051 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5"] Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.945736 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.958447 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.959351 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvmk\" (UniqueName: \"kubernetes.io/projected/ab78d179-e8ea-44ca-a95a-c634dab5aa6b-kube-api-access-xqvmk\") pod \"dns-operator-744455d44c-vv5d4\" (UID: \"ab78d179-e8ea-44ca-a95a-c634dab5aa6b\") " pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.966433 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.976544 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.980219 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" Mar 16 15:16:21 crc kubenswrapper[4736]: W0316 15:16:21.984362 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod584c55a3_9d43_42ab_9fcd_b3a938b52dc1.slice/crio-01aa617bbad403a371236aafe9e6b072630abb69eb05625bd2171908d08ea04c WatchSource:0}: Error finding container 01aa617bbad403a371236aafe9e6b072630abb69eb05625bd2171908d08ea04c: Status 404 returned error can't find the container with id 01aa617bbad403a371236aafe9e6b072630abb69eb05625bd2171908d08ea04c Mar 16 15:16:21 crc kubenswrapper[4736]: I0316 15:16:21.988463 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmcsl\" (UniqueName: \"kubernetes.io/projected/f022dad1-489f-4a36-b60b-6b34e8dbd5aa-kube-api-access-hmcsl\") pod \"machine-config-controller-84d6567774-7d4rb\" (UID: \"f022dad1-489f-4a36-b60b-6b34e8dbd5aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.000016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnrm\" (UniqueName: \"kubernetes.io/projected/02c1a3b8-f0be-49ef-ad4e-6734d14df6b2-kube-api-access-fwnrm\") pod \"csi-hostpathplugin-p47pv\" (UID: \"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2\") " pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.002602 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.008006 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" event={"ID":"99f1c1c6-cc63-4104-afb0-ff540cf588a9","Type":"ContainerStarted","Data":"893e988b7c3372d9147b3ea6effdfcb6ced26a7f738ed41d59346cf7399bac47"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.013129 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.020598 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.023086 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.522836537 +0000 UTC m=+184.250226824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.024294 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drp9v\" (UniqueName: \"kubernetes.io/projected/a14c47f9-54bf-455c-9e43-a36fb2fd871b-kube-api-access-drp9v\") pod \"ingress-canary-qv77g\" (UID: \"a14c47f9-54bf-455c-9e43-a36fb2fd871b\") " pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.028446 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" event={"ID":"7984ce35-90df-4462-b2d5-6d25102d7bb5","Type":"ContainerStarted","Data":"c058829e7179e721f29a0829eb6fe919ff56c20f2fcd754f262685d7495f7c50"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.028519 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" event={"ID":"7984ce35-90df-4462-b2d5-6d25102d7bb5","Type":"ContainerStarted","Data":"cd185c058fff283118c1ba65a771933a13840afc5d6a0c78ea18e7f91a6cd1df"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.028937 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.029092 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.032985 4736 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cpv4c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.033043 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.037340 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" event={"ID":"56799fb0-e11b-40fb-812c-bb7907d5b25f","Type":"ContainerStarted","Data":"b1e44e41452ceb6b5528f02c937eddbce1565e3507e33c99c3289f1a76279d25"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.038872 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.051332 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.070670 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.074222 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5zbw\" (UniqueName: \"kubernetes.io/projected/90e2fab8-e507-485e-9c3d-63d00f092fa1-kube-api-access-z5zbw\") pod \"machine-config-server-bx5tm\" (UID: \"90e2fab8-e507-485e-9c3d-63d00f092fa1\") " pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.090011 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6fjr\" (UniqueName: \"kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr\") pod \"marketplace-operator-79b997595-kr8vn\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.091492 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.096854 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.097351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" event={"ID":"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc","Type":"ContainerStarted","Data":"f5899a1e32234710192b86644293d047044ef2d9362a13bdc9c3978d4b2b4f06"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.097400 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" event={"ID":"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc","Type":"ContainerStarted","Data":"7f1c4e1a4760118ad4d3d33877b4eeaf3641314ae7527dbe2f15a7317dc1b6ea"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.105895 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.110008 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmhf\" (UniqueName: \"kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf\") pod \"collect-profiles-29561235-7vr89\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.114550 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.115578 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2j5dh"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.126380 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48t4\" (UniqueName: \"kubernetes.io/projected/9ad17779-e378-4bfe-bf98-b746baf0d153-kube-api-access-f48t4\") pod \"multus-admission-controller-857f4d67dd-tbntp\" (UID: \"9ad17779-e378-4bfe-bf98-b746baf0d153\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.127621 4736 generic.go:334] "Generic (PLEG): container finished" podID="b6c41b72-0beb-4da9-9717-f975a6608475" containerID="f89d84c73f45c6fb0f0bd4d898f43a0506e5ef5bf943a517df9d543018b55cd2" exitCode=0 Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.127705 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" event={"ID":"b6c41b72-0beb-4da9-9717-f975a6608475","Type":"ContainerDied","Data":"f89d84c73f45c6fb0f0bd4d898f43a0506e5ef5bf943a517df9d543018b55cd2"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.127743 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" event={"ID":"b6c41b72-0beb-4da9-9717-f975a6608475","Type":"ContainerStarted","Data":"68bb307d1b5d255c3ed702ff558ad2e5eeb4c95cfd084dfc5b64c6fd2907804a"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.128709 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.130999 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" event={"ID":"2fa766eb-c490-46d8-8154-13eb2719e6e0","Type":"ContainerStarted","Data":"98ae41d9e39059e74240ebf1b4647beb81ba50428220678bc2c37d8ed0c400fb"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.134407 4736 generic.go:334] "Generic (PLEG): container finished" podID="1b7712f1-6d20-477f-a190-a50a6d35c238" containerID="4604c2ea2160752b610a2a4d4eee9d1cf9a84045d1879383c842a3eabd42abd0" exitCode=0 Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.134482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" event={"ID":"1b7712f1-6d20-477f-a190-a50a6d35c238","Type":"ContainerDied","Data":"4604c2ea2160752b610a2a4d4eee9d1cf9a84045d1879383c842a3eabd42abd0"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.134505 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" event={"ID":"1b7712f1-6d20-477f-a190-a50a6d35c238","Type":"ContainerStarted","Data":"b771310df54d771a9665a8f8ee3d82824013088af75c1df22544be37ae301ff6"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.153273 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8jm\" (UniqueName: 
\"kubernetes.io/projected/6f744d0a-ff18-46d3-b3ab-731e0308e0b6-kube-api-access-bw8jm\") pod \"dns-default-ngjsb\" (UID: \"6f744d0a-ff18-46d3-b3ab-731e0308e0b6\") " pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.153380 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.653344888 +0000 UTC m=+184.380735245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.154540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bzh5" event={"ID":"5881f71f-e94a-4bc8-8d08-8bf079fd10e3","Type":"ContainerStarted","Data":"57545ea2bc7c59ec00c3fbc072d32911a3a8e9f710c51ff6a94c1ac16ff025c1"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.173352 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjqz\" (UniqueName: \"kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz\") pod \"auto-csr-approver-29561236-cxwv6\" (UID: \"ae9de77b-767a-4a87-b4bd-648728dc9826\") " pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.173690 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" event={"ID":"84a3bf2d-cfc8-4cc4-a284-1e34af54f453","Type":"ContainerStarted","Data":"0ac6ae88a01ff4b19d5652e4c94f7f54025610c3ad8f6c3417fa9da00ed8c558"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.173934 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" event={"ID":"84a3bf2d-cfc8-4cc4-a284-1e34af54f453","Type":"ContainerStarted","Data":"61f729f3e2fe8a327c29147af6afa24aa0b1c8e638d0fbaf1d186ebc86277b39"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.174921 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.201224 4736 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4dzg6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.201276 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.204352 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bx5tm" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.207373 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzlh\" (UniqueName: \"kubernetes.io/projected/f8234512-ffa3-4229-b3d5-be9360b59dac-kube-api-access-vgzlh\") pod \"catalog-operator-68c6474976-xzwwg\" (UID: \"f8234512-ffa3-4229-b3d5-be9360b59dac\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.222687 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdgjv\" (UniqueName: \"kubernetes.io/projected/6d00785d-6730-42d9-8004-f1bbc451d581-kube-api-access-sdgjv\") pod \"packageserver-d55dfcdfc-qnszh\" (UID: \"6d00785d-6730-42d9-8004-f1bbc451d581\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.224823 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.226666 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.226910 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qv77g" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.230881 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4c57\" (UniqueName: \"kubernetes.io/projected/db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74-kube-api-access-c4c57\") pod \"olm-operator-6b444d44fb-hxs96\" (UID: \"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.263669 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.264593 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.764577763 +0000 UTC m=+184.491968050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.293644 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.315101 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" event={"ID":"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce","Type":"ContainerStarted","Data":"ba1e8362d52a2360dd8a4899e3ca85fb6fd90e77c2f094b2ee31b1b68328a5e3"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.315159 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" event={"ID":"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce","Type":"ContainerStarted","Data":"2dde38b74029fc65e97f1d4a685e30c8d5c7fa66a24ccab04412ae8a9e6871e9"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.330383 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" event={"ID":"77900183-2391-4a47-8468-b36847297446","Type":"ContainerStarted","Data":"01e036e136dca802df3300b0a23d280c1a19ae0e5ba046fd3abe3825dc0d4d38"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.330550 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" event={"ID":"77900183-2391-4a47-8468-b36847297446","Type":"ContainerStarted","Data":"7583de6f553ec0b5a508d3858d94f51624be2cb6b8fb7e41546197bcca37f30a"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.336844 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ns4mr" event={"ID":"584c55a3-9d43-42ab-9fcd-b3a938b52dc1","Type":"ContainerStarted","Data":"01aa617bbad403a371236aafe9e6b072630abb69eb05625bd2171908d08ea04c"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.337631 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prp65\" (UniqueName: \"kubernetes.io/projected/ef5af3b4-3c61-4469-84e2-5e0bac26ed18-kube-api-access-prp65\") pod \"service-ca-operator-777779d784-ncvk2\" (UID: \"ef5af3b4-3c61-4469-84e2-5e0bac26ed18\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.346635 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" event={"ID":"f971a0f2-eb9c-4fe6-a2df-52267d16f41a","Type":"ContainerStarted","Data":"9695379c19e48e2db9f254c7ee79ec8edb9db26c01fb958432e65ca8744a79cf"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.346687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" event={"ID":"f971a0f2-eb9c-4fe6-a2df-52267d16f41a","Type":"ContainerStarted","Data":"b8c912bcef9fa9b220dd0900a0de83a425e3ec0513e3b2597f5260b641140caa"} Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 
15:16:22.347996 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvxx8\" (UniqueName: \"kubernetes.io/projected/4aa7778f-ab77-4cbe-ac51-99c2f2206b15-kube-api-access-xvxx8\") pod \"package-server-manager-789f6589d5-wtl7f\" (UID: \"4aa7778f-ab77-4cbe-ac51-99c2f2206b15\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.368890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.369383 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.869370637 +0000 UTC m=+184.596760924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.407846 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.426997 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.430954 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.434607 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.444187 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.468188 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.470389 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.472369 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:22.972331651 +0000 UTC m=+184.699721978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.475155 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.497328 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.572926 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.573404 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.073386904 +0000 UTC m=+184.800777191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.577779 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" podStartSLOduration=121.577748961 podStartE2EDuration="2m1.577748961s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:22.576501388 +0000 UTC m=+184.303891695" watchObservedRunningTime="2026-03-16 15:16:22.577748961 +0000 UTC m=+184.305139248" Mar 16 15:16:22 crc kubenswrapper[4736]: W0316 15:16:22.603417 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26014393_61ac_493d_b8b5_c71abee3a415.slice/crio-6e9512e8ddd0cb14be2fb96009059a03ec1b6406261d2d3afb14243dbf35aad2 WatchSource:0}: Error finding container 6e9512e8ddd0cb14be2fb96009059a03ec1b6406261d2d3afb14243dbf35aad2: Status 404 returned error can't find the container with id 6e9512e8ddd0cb14be2fb96009059a03ec1b6406261d2d3afb14243dbf35aad2 Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.674635 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.675604 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.175572518 +0000 UTC m=+184.902962805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.770313 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j58gg"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.785316 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.785941 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.285921339 +0000 UTC m=+185.013311626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.830822 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.878048 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.887010 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.887671 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.387654372 +0000 UTC m=+185.115044659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.960575 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6"] Mar 16 15:16:22 crc kubenswrapper[4736]: I0316 15:16:22.989625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:22 crc kubenswrapper[4736]: E0316 15:16:22.990308 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.490295657 +0000 UTC m=+185.217685944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.031900 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.091648 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.092010 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.591987318 +0000 UTC m=+185.319377605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.092426 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.093990 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.59397738 +0000 UTC m=+185.321367667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.193637 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.193947 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.693932595 +0000 UTC m=+185.421322882 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.210235 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7gf6z"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.263444 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" podStartSLOduration=122.263421484 podStartE2EDuration="2m2.263421484s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:23.235312022 +0000 UTC m=+184.962702309" watchObservedRunningTime="2026-03-16 15:16:23.263421484 +0000 UTC m=+184.990811761" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.297976 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.298993 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.798982124 +0000 UTC m=+185.526372411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.417908 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.423420 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.423832 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:23.923817164 +0000 UTC m=+185.651207451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.449895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" event={"ID":"e2850ca2-f7d9-4aa6-9163-e0b32c53cdce","Type":"ContainerStarted","Data":"702f92b935258dd4754d5efb63051c403946fab7f5322c63f9a282053f73cebd"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.474392 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" event={"ID":"a3d36700-672c-441c-bf3a-23f4ef140fd5","Type":"ContainerStarted","Data":"203ac2f84ee3efc3380ae4d599e44a069e269c7bd57b7d029ba4848d923f69bd"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.504524 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ns4mr" event={"ID":"584c55a3-9d43-42ab-9fcd-b3a938b52dc1","Type":"ContainerStarted","Data":"3dd1cd85117fc970ac69e46b951b84d418dfe10375c91f03231edb03f1a8f73c"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.522756 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" podStartSLOduration=122.522714459 podStartE2EDuration="2m2.522714459s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:23.503670361 +0000 UTC m=+185.231060648" watchObservedRunningTime="2026-03-16 15:16:23.522714459 +0000 UTC m=+185.250104746" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.525257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.525692 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.025680189 +0000 UTC m=+185.753070476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.552642 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" event={"ID":"2fa766eb-c490-46d8-8154-13eb2719e6e0","Type":"ContainerStarted","Data":"734af9ed9a84ec2e85f0c19eb831187a83b8bf4584407ca6121e14e068d4025e"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.626337 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.626600 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" event={"ID":"99f1c1c6-cc63-4104-afb0-ff540cf588a9","Type":"ContainerStarted","Data":"bfee9ed5121abb4865c4161b3e90bdaabeeaeeeb20e53d3723c57cdb5bae2644"} Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.627117 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.127071632 +0000 UTC m=+185.854461919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.627271 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.635844 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.135821876 +0000 UTC m=+185.863212163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.669760 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" event={"ID":"4ce839b1-2853-4984-9105-73e482b62cfb","Type":"ContainerStarted","Data":"b00face8d5ce8967326f0bcfdea243c241c76e2a0dc18883f07ca72a1dc1c5a3"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.691907 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.734914 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.735633 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.235605244 +0000 UTC m=+185.962995531 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.779069 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" event={"ID":"56799fb0-e11b-40fb-812c-bb7907d5b25f","Type":"ContainerStarted","Data":"90634c43d47cf3bcafbf56d3605ee19215cf7e9f6177cd1f92c2f2b0cd632e10"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.790511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" event={"ID":"fcef29de-d881-4b9e-871e-6a2cc33484b6","Type":"ContainerStarted","Data":"765ab0d0fa5eaeb19105ee0bb70de88acbbdf5af3d186dad461e72501e48de8c"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.801937 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.804290 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.804360 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.821689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2bzh5" event={"ID":"5881f71f-e94a-4bc8-8d08-8bf079fd10e3","Type":"ContainerStarted","Data":"2fa0b67781f410edcf339d616dd8624686ee8e115417becd9f2c73d0f317b071"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.822034 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.834155 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.834246 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.857994 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p78nr" event={"ID":"bdf738c2-dd67-4aea-9d3e-03d68658ee50","Type":"ContainerStarted","Data":"fe5afa948477d3bb0b33a7b26ffe22214def79f125635dd292c53d100e6eb72f"} Mar 
16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.863401 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:23 crc kubenswrapper[4736]: E0316 15:16:23.863951 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.363878096 +0000 UTC m=+186.091268383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.891699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" event={"ID":"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e","Type":"ContainerStarted","Data":"b3a59d533bdd8823951efea3c13b49db877c8f21c9f260cb37d732f651065cd7"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.895648 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-df26x"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.906185 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" event={"ID":"10773cef-e8d2-43c6-a821-901ab3ebd72f","Type":"ContainerStarted","Data":"7595860e0cfa4425fd62214886525f82c86a3d5451a11bfa2e5c122bd71bcef2"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.926447 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" event={"ID":"f022dad1-489f-4a36-b60b-6b34e8dbd5aa","Type":"ContainerStarted","Data":"47abf65155d1ce1339b492935abd706c649fe6cf03a3bd0dcc4f4676e23f943d"} Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.957568 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p47pv"] Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.970222 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:23 crc kubenswrapper[4736]: I0316 15:16:23.972451 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerStarted","Data":"7540670a1ddb0e209681af725514cb0b208405cf2b03df7be5c78e79d924a422"} Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.004410 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" event={"ID":"f971a0f2-eb9c-4fe6-a2df-52267d16f41a","Type":"ContainerStarted","Data":"98ec2bd9c3b8911667f8c74ee2498d9db86a7cb10e7243d71b761d03e616b65b"} Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.008320 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.50829375 +0000 UTC m=+186.235684037 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.041558 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-vv5d4"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.049131 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" event={"ID":"365f3914-692f-4b54-b389-89e9a276a9d9","Type":"ContainerStarted","Data":"14b5f792e9b4ed533c2df8ddc67842556e19cd6925cce38ea7c00297de4e0297"} Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.073360 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" event={"ID":"aee91c3b-8c99-4023-a891-2aaa3ab5ebcc","Type":"ContainerStarted","Data":"5eea423cc82032c36c94587e710c807b2a42a3f7698d13a72f18e129b30a6b1b"} Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.075573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.076057 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.576041622 +0000 UTC m=+186.303431909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.105907 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh"] Mar 16 15:16:24 crc kubenswrapper[4736]: W0316 15:16:24.149238 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02c1a3b8_f0be_49ef_ad4e_6734d14df6b2.slice/crio-c02a8e06627270920511e0be256e7a5729ac49f72a452c9b3503a6d77edc996b WatchSource:0}: Error finding container c02a8e06627270920511e0be256e7a5729ac49f72a452c9b3503a6d77edc996b: Status 404 returned error can't find the container with id c02a8e06627270920511e0be256e7a5729ac49f72a452c9b3503a6d77edc996b Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.150738 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" event={"ID":"26014393-61ac-493d-b8b5-c71abee3a415","Type":"ContainerStarted","Data":"6e9512e8ddd0cb14be2fb96009059a03ec1b6406261d2d3afb14243dbf35aad2"} Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.176409 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.176537 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rfgmf" podStartSLOduration=123.17651274 podStartE2EDuration="2m3.17651274s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.17616667 +0000 UTC m=+185.903556957" watchObservedRunningTime="2026-03-16 15:16:24.17651274 +0000 UTC m=+185.903903027" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.178370 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.678343748 +0000 UTC m=+186.405734035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.187798 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.204208 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.251653 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vrcz" podStartSLOduration=123.251623299 podStartE2EDuration="2m3.251623299s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.250900639 +0000 UTC m=+185.978290926" watchObservedRunningTime="2026-03-16 15:16:24.251623299 +0000 UTC m=+185.979013576" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.278622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.288131 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.788093035 +0000 UTC m=+186.515483322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.301673 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-grkd5" podStartSLOduration=123.301654697 podStartE2EDuration="2m3.301654697s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.300720402 +0000 UTC m=+186.028110689" watchObservedRunningTime="2026-03-16 15:16:24.301654697 +0000 UTC m=+186.029044984" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.394674 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.395324 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:24.895304242 +0000 UTC m=+186.622694539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.405697 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ns4mr" podStartSLOduration=123.40567048 podStartE2EDuration="2m3.40567048s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.398640161 +0000 UTC m=+186.126030448" watchObservedRunningTime="2026-03-16 15:16:24.40567048 +0000 UTC m=+186.133060767" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.409087 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.506039 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.506606 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.00658913 +0000 UTC m=+186.733979417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.550940 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2bzh5" podStartSLOduration=123.550904744 podStartE2EDuration="2m3.550904744s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.543370103 +0000 UTC m=+186.270760390" watchObservedRunningTime="2026-03-16 15:16:24.550904744 +0000 UTC m=+186.278295031" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.551883 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dswrb" podStartSLOduration=123.551877651 podStartE2EDuration="2m3.551877651s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.471086229 +0000 UTC m=+186.198476506" watchObservedRunningTime="2026-03-16 15:16:24.551877651 +0000 UTC m=+186.279267938" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.580014 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g4ggq" podStartSLOduration=123.579993763 podStartE2EDuration="2m3.579993763s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:24.578887403 +0000 UTC m=+186.306277690" watchObservedRunningTime="2026-03-16 15:16:24.579993763 +0000 UTC m=+186.307384050" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.611849 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.612357 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.112335778 +0000 UTC m=+186.839726065 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.662019 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.715022 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.719626 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.219597338 +0000 UTC m=+186.946987625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.824094 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.824589 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.324572495 +0000 UTC m=+187.051962782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.825451 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:24 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:24 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:24 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.825538 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.828635 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.874442 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561236-cxwv6"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.911326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ngjsb"] Mar 16 15:16:24 crc kubenswrapper[4736]: I0316 15:16:24.928000 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:24 crc kubenswrapper[4736]: E0316 15:16:24.928365 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.428352311 +0000 UTC m=+187.155742598 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.017917 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.018145 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.032232 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.039405 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.539375762 +0000 UTC m=+187.266766049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: W0316 15:16:25.053038 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f744d0a_ff18_46d3_b3ab_731e0308e0b6.slice/crio-c53086059bd55c841d5faf6c364aa95290a6e2e106c323671dd6ded0a60a2dcb WatchSource:0}: Error finding container c53086059bd55c841d5faf6c364aa95290a6e2e106c323671dd6ded0a60a2dcb: Status 404 returned error can't find the container with id c53086059bd55c841d5faf6c364aa95290a6e2e106c323671dd6ded0a60a2dcb Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.129275 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60702: no serving certificate available for the kubelet" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.143451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.143787 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-16 15:16:25.643773654 +0000 UTC m=+187.371163941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.206054 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kzg9q"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.250245 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.250365 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.750348035 +0000 UTC m=+187.477738322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.250630 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.250969 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.750960111 +0000 UTC m=+187.478350398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.271971 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" event={"ID":"26014393-61ac-493d-b8b5-c71abee3a415","Type":"ContainerStarted","Data":"1d576d8517905e7eddb70ae684e47fc2929f4c83a375caf8f083cc7a49f1dc26"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.282266 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60714: no serving certificate available for the kubelet" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.293958 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngjsb" event={"ID":"6f744d0a-ff18-46d3-b3ab-731e0308e0b6","Type":"ContainerStarted","Data":"c53086059bd55c841d5faf6c364aa95290a6e2e106c323671dd6ded0a60a2dcb"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.312291 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.351694 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.352943 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.85292636 +0000 UTC m=+187.580316647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.355659 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" event={"ID":"f022dad1-489f-4a36-b60b-6b34e8dbd5aa","Type":"ContainerStarted","Data":"80ad4cbc0739ebb7293a7526749a5df413db85ef66e6919e1df99607a0054500"} Mar 16 15:16:25 crc kubenswrapper[4736]: W0316 15:16:25.376345 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3081e47_19ee_40fa_b5db_9c01bc56228a.slice/crio-7d28acb8b78ce5cc5ef301f5bb14faaf96e359c41dded139c892524c70fef2ab WatchSource:0}: Error finding container 7d28acb8b78ce5cc5ef301f5bb14faaf96e359c41dded139c892524c70fef2ab: Status 404 returned error can't find the container with id 7d28acb8b78ce5cc5ef301f5bb14faaf96e359c41dded139c892524c70fef2ab Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.377176 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bx5tm" event={"ID":"90e2fab8-e507-485e-9c3d-63d00f092fa1","Type":"ContainerStarted","Data":"2268d255975618cba1c1182b5d052393a388cf4bec84d44a6d381ab83331b4f7"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.378493 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qwc6v" podStartSLOduration=124.378468863 podStartE2EDuration="2m4.378468863s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:25.335275207 +0000 UTC m=+187.062665494" watchObservedRunningTime="2026-03-16 15:16:25.378468863 +0000 UTC m=+187.105859150" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.380659 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qv77g"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.389335 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60724: no serving certificate available for the kubelet" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.410397 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" event={"ID":"1b7712f1-6d20-477f-a190-a50a6d35c238","Type":"ContainerStarted","Data":"0af99b7d1e96c8127bac7564978a0e5ce1d27fdc29107132e6d0a56d6609e4f5"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.423790 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p78nr" event={"ID":"bdf738c2-dd67-4aea-9d3e-03d68658ee50","Type":"ContainerStarted","Data":"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.429020 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" 
event={"ID":"4cb215d7-e71c-4a0b-9817-49a4f8c7f60e","Type":"ContainerStarted","Data":"c90763506282ca6e37638b7c91faf6501640cb5db7df05fef2397df79edf407d"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.453284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.453640 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:25.953627554 +0000 UTC m=+187.681017841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.462508 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" event={"ID":"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74","Type":"ContainerStarted","Data":"02a772e48836f6fe0dc6f97d8079d997a05f7fb7d0e4bde961d4672dddd7b8f2"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.508007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" event={"ID":"dca4fa92-819d-4973-87b1-b6282946f072","Type":"ContainerStarted","Data":"2bf1c2e64dd037dabb1a0ac4d18c52716447a5a6274e1a83af7d10538828b0ef"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.511721 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" event={"ID":"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2","Type":"ContainerStarted","Data":"c02a8e06627270920511e0be256e7a5729ac49f72a452c9b3503a6d77edc996b"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.512627 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60728: no serving certificate available for the kubelet" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.513402 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" event={"ID":"fcef29de-d881-4b9e-871e-6a2cc33484b6","Type":"ContainerStarted","Data":"6726ca88553fdbb72dc084832b7135f9990c65c7df2841457940896d3db678ef"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.514374 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.525417 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-2j5dh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 
15:16:25.525483 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podUID="fcef29de-d881-4b9e-871e-6a2cc33484b6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.527602 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" event={"ID":"6d00785d-6730-42d9-8004-f1bbc451d581","Type":"ContainerStarted","Data":"bdca0d7d2918722c9770849b2bfc49450a3b51d6fad9f402c81a5d47327b7341"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.536811 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" event={"ID":"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b","Type":"ContainerStarted","Data":"881a605d8d36bfc008bf35936cff8e59e819f3376209b711756765d7d206ecd9"} Mar 16 15:16:25 crc kubenswrapper[4736]: W0316 15:16:25.536952 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode95b535a_2e38_4797_97c6_5ab54160b983.slice/crio-d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500 WatchSource:0}: Error finding container d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500: Status 404 returned error can't find the container with id d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500 Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.547703 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" event={"ID":"4aa7778f-ab77-4cbe-ac51-99c2f2206b15","Type":"ContainerStarted","Data":"3edf87da856baae817767ac453b7f0bce1cf0e21621372fc64d050dfbf2121f3"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.555885 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.557773 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.057754289 +0000 UTC m=+187.785144576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.565784 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" event={"ID":"ab78d179-e8ea-44ca-a95a-c634dab5aa6b","Type":"ContainerStarted","Data":"efcbd7b905f8c88abafbb2f11e4377a7743278732e6ec2048149351c32cf6ae7"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.591588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" event={"ID":"6c3b56d3-03e4-4bba-8ac0-072c4a281513","Type":"ContainerStarted","Data":"6fe4dc579d879caf4eda3622821f4b59b461b7401bb994f72747d6f5aefb827a"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.647509 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" event={"ID":"61ab67d8-7624-44b8-8891-37490ba9ab4b","Type":"ContainerStarted","Data":"642a899f4266e612ebbc76de79f3997f943e2fd9cd775feb8ea5a619bba758db"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.651580 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-p78nr" podStartSLOduration=124.651564568 podStartE2EDuration="2m4.651564568s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:25.475179849 +0000 UTC m=+187.202570136" watchObservedRunningTime="2026-03-16 15:16:25.651564568 +0000 UTC m=+187.378954855" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.651810 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rmd28" podStartSLOduration=124.651805034 podStartE2EDuration="2m4.651805034s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:25.599626839 +0000 UTC m=+187.327017126" watchObservedRunningTime="2026-03-16 15:16:25.651805034 +0000 UTC m=+187.379195321" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.657541 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.658003 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.15798781 +0000 UTC m=+187.885378097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.665639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" event={"ID":"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13","Type":"ContainerStarted","Data":"a19ba4ba6b4c7db445181fd2145c6bd6eabb8c9d0bff761fc4a5fd9ce2dd2cfd"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.727120 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.756923 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" event={"ID":"99f1c1c6-cc63-4104-afb0-ff540cf588a9","Type":"ContainerStarted","Data":"b4f7700d7fcb5e06ab39868525fbc6a250db236dd1d8f93ca89a220f4cdf6785"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.758819 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.758941 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.25891431 +0000 UTC m=+187.986304597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.759540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.760496 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.260477521 +0000 UTC m=+187.987867808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.763787 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.768500 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podStartSLOduration=124.768472436 podStartE2EDuration="2m4.768472436s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:25.749247971 +0000 UTC m=+187.476638268" watchObservedRunningTime="2026-03-16 15:16:25.768472436 +0000 UTC m=+187.495862723" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.775287 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" event={"ID":"ae9de77b-767a-4a87-b4bd-648728dc9826","Type":"ContainerStarted","Data":"d988d5aa945c64ec483107c52ba2ae2505e9c617cd165f7ed2b39b1ff740d2a4"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.806621 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:25 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:25 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:25 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.806694 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.843239 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60742: no serving certificate available for the kubelet" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.844036 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t9m65" podStartSLOduration=124.844016816 podStartE2EDuration="2m4.844016816s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:25.842787654 +0000 UTC m=+187.570177931" watchObservedRunningTime="2026-03-16 15:16:25.844016816 +0000 UTC m=+187.571407103" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.878881 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.881274 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.381241082 +0000 UTC m=+188.108631369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.885196 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.885805 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.385789814 +0000 UTC m=+188.113180101 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.887850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" event={"ID":"b6c41b72-0beb-4da9-9717-f975a6608475","Type":"ContainerStarted","Data":"d633e0da43b4eca0dfac99f59009cef9d15429dae2a89ad77b0bd834f9b95dd5"} Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.898286 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tbntp"] Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.901718 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.901808 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.986464 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:25 crc kubenswrapper[4736]: E0316 15:16:25.986618 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.486601951 +0000 UTC m=+188.213992238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:25 crc kubenswrapper[4736]: I0316 15:16:25.987028 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:25.993834 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.493813673 +0000 UTC m=+188.221203950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.036631 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" podStartSLOduration=125.036614229 podStartE2EDuration="2m5.036614229s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:26.01052331 +0000 UTC m=+187.737913597" watchObservedRunningTime="2026-03-16 15:16:26.036614229 +0000 UTC m=+187.764004516" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.088767 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.089297 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.589276137 +0000 UTC m=+188.316666424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: W0316 15:16:26.109376 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ad17779_e378_4bfe_bf98_b746baf0d153.slice/crio-0550431e377fc9f90a75b5cc1269ffc291e313e6d9ce20e998451b7f3c49826c WatchSource:0}: Error finding container 0550431e377fc9f90a75b5cc1269ffc291e313e6d9ce20e998451b7f3c49826c: Status 404 returned error can't find the container with id 0550431e377fc9f90a75b5cc1269ffc291e313e6d9ce20e998451b7f3c49826c Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.133522 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60754: no serving certificate available for the kubelet" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.190303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.190675 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.69066328 +0000 UTC m=+188.418053567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.271275 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60762: no serving certificate available for the kubelet" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.293557 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.293623 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.793583952 +0000 UTC m=+188.520974239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.293955 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.294593 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.794575809 +0000 UTC m=+188.521966096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.401594 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.401841 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.901824408 +0000 UTC m=+188.629214695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.402236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.402476 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.403599 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:26.903582135 +0000 UTC m=+188.630972422 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.431969 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.449036 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60766: no serving certificate available for the kubelet" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.457714 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5586cec2-8615-4d8b-a695-65ee04613c35-metrics-certs\") pod \"network-metrics-daemon-smqd4\" (UID: \"5586cec2-8615-4d8b-a695-65ee04613c35\") " pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.505796 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.512532 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-16 15:16:27.012502039 +0000 UTC m=+188.739892326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.608260 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.608950 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.108934608 +0000 UTC m=+188.836324885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.619263 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.629363 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-smqd4" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.709911 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.710372 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.210343091 +0000 UTC m=+188.937733378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.710884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.711392 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.211373439 +0000 UTC m=+188.938763726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.809036 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:26 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:26 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:26 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.809129 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.811918 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.812386 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.312368541 +0000 UTC m=+189.039758828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.913253 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:26 crc kubenswrapper[4736]: E0316 15:16:26.913628 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.413614549 +0000 UTC m=+189.141004826 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.965754 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" event={"ID":"6c3b56d3-03e4-4bba-8ac0-072c4a281513","Type":"ContainerStarted","Data":"9976cff7d658c06e9bdff4eec6c60a79a3a9eed47e219f988a97b7c8888f7d6e"} Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.966885 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.970159 4736 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-76btc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Mar 16 15:16:26 crc kubenswrapper[4736]: I0316 15:16:26.970208 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.014860 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.015346 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.51532313 +0000 UTC m=+189.242713417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.043964 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podStartSLOduration=126.043938885 podStartE2EDuration="2m6.043938885s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.029941791 +0000 UTC m=+188.757332078" watchObservedRunningTime="2026-03-16 15:16:27.043938885 +0000 UTC m=+188.771329172" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.086973 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" event={"ID":"e95b535a-2e38-4797-97c6-5ab54160b983","Type":"ContainerStarted","Data":"988ab0a1449a75719b2eaa713d728f43e859fed2b271c60d45a8ff342d1ac67d"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.087449 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" event={"ID":"e95b535a-2e38-4797-97c6-5ab54160b983","Type":"ContainerStarted","Data":"d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.105060 4736 generic.go:334] "Generic (PLEG): container finished" podID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerID="fb941c81b17ae6f00c2222ccc6114e61cfb460b2670a82cf71403b9dd0c78576" exitCode=0 Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.105175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerDied","Data":"fb941c81b17ae6f00c2222ccc6114e61cfb460b2670a82cf71403b9dd0c78576"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.117997 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.118345 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.618330326 +0000 UTC m=+189.345720613 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.122794 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" podStartSLOduration=87.122778214 podStartE2EDuration="1m27.122778214s" podCreationTimestamp="2026-03-16 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.121056668 +0000 UTC m=+188.848446955" watchObservedRunningTime="2026-03-16 15:16:27.122778214 +0000 UTC m=+188.850168501" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.159904 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" event={"ID":"4aa7778f-ab77-4cbe-ac51-99c2f2206b15","Type":"ContainerStarted","Data":"17e6e80dc12eef05cfd8f4350dfc714c40232e5ea73b94ffa3b859952f80bf82"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.187913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" event={"ID":"4ce839b1-2853-4984-9105-73e482b62cfb","Type":"ContainerStarted","Data":"dd82582575267cc7806ed72bebcbe004113aa7eca5f93612a0f65e3950963c8f"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.199629 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" event={"ID":"f022dad1-489f-4a36-b60b-6b34e8dbd5aa","Type":"ContainerStarted","Data":"072446ce3c62190b8fcadacc250185fd0b73cff1f624f664c326a3e689b62e87"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.201479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" event={"ID":"a3081e47-19ee-40fa-b5db-9c01bc56228a","Type":"ContainerStarted","Data":"7d28acb8b78ce5cc5ef301f5bb14faaf96e359c41dded139c892524c70fef2ab"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.208033 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" event={"ID":"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13","Type":"ContainerStarted","Data":"19736734d64853bd38523d37bcd04beb93ade4218888e7a8179498c1c7e2c3d8"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.208119 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" event={"ID":"ebc7ebc6-eb8a-49f2-ba7e-22885fb83d13","Type":"ContainerStarted","Data":"4aa37d9c735301131172f59ce86eb4f673792d50d3965ea4d7fbadcad0eeb683"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.219179 4736 ???:1] "http: TLS handshake error from 192.168.126.11:60772: no serving certificate available for the kubelet" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.219784 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.220890 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.720869968 +0000 UTC m=+189.448260255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.234475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" event={"ID":"61ab67d8-7624-44b8-8891-37490ba9ab4b","Type":"ContainerStarted","Data":"6c6809e41a6c637f97acd4a5a6023303588e2134b5416a5bad42bbe6181ced63"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.276793 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" event={"ID":"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b","Type":"ContainerStarted","Data":"d319502e04e719f2f898a6200a6256d5b73d6f35513e32f9f9e4ba732df89abd"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.278072 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.291148 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kr8vn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.291222 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.303484 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prl2f" podStartSLOduration=126.303465478 podStartE2EDuration="2m6.303465478s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.302427681 +0000 UTC m=+189.029817968" watchObservedRunningTime="2026-03-16 15:16:27.303465478 +0000 UTC m=+189.030855765" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.305063 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pwfh6" podStartSLOduration=126.30504007 podStartE2EDuration="2m6.30504007s" 
podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.241265404 +0000 UTC m=+188.968655691" watchObservedRunningTime="2026-03-16 15:16:27.30504007 +0000 UTC m=+189.032430357" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.318694 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" event={"ID":"f8234512-ffa3-4229-b3d5-be9360b59dac","Type":"ContainerStarted","Data":"79f8af816697e4dd44333287615dfd71b07947c665a4264144068220f928c918"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.319816 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.328354 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.335012 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.834994361 +0000 UTC m=+189.562384648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.337398 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-7d4rb" podStartSLOduration=126.337369565 podStartE2EDuration="2m6.337369565s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.335475674 +0000 UTC m=+189.062865951" watchObservedRunningTime="2026-03-16 15:16:27.337369565 +0000 UTC m=+189.064759852" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.346508 4736 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzwwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.346569 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" podUID="f8234512-ffa3-4229-b3d5-be9360b59dac" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 16 15:16:27 crc 
kubenswrapper[4736]: I0316 15:16:27.402869 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" event={"ID":"9ad17779-e378-4bfe-bf98-b746baf0d153","Type":"ContainerStarted","Data":"0550431e377fc9f90a75b5cc1269ffc291e313e6d9ce20e998451b7f3c49826c"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.433510 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" event={"ID":"10773cef-e8d2-43c6-a821-901ab3ebd72f","Type":"ContainerStarted","Data":"461f57aec70064b3bbfdb9604c2b6c00b9cca5e0ff32be39a41d77e01c57af93"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.434881 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.436230 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:27.936205688 +0000 UTC m=+189.663595965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.481340 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" event={"ID":"365f3914-692f-4b54-b389-89e9a276a9d9","Type":"ContainerStarted","Data":"34a4fe967ebb20d59d848197cbe7ad6101b77c950356fa7680dfe0a2935c8b69"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.508836 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" event={"ID":"a3d36700-672c-441c-bf3a-23f4ef140fd5","Type":"ContainerStarted","Data":"4fc189eb56f8bac6e49f067cf789e497183d416c1f90286de6071b938689bbd1"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.523900 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" event={"ID":"dca4fa92-819d-4973-87b1-b6282946f072","Type":"ContainerStarted","Data":"02e44f13328847216c953c9ff81e610e6469d3b136c48e107696ee47e8df0c04"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.538060 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.539731 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.039713947 +0000 UTC m=+189.767104234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.550643 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" podStartSLOduration=126.550595619 podStartE2EDuration="2m6.550595619s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.413745478 +0000 UTC m=+189.141135765" watchObservedRunningTime="2026-03-16 15:16:27.550595619 +0000 UTC m=+189.277985906" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.593689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" event={"ID":"ab78d179-e8ea-44ca-a95a-c634dab5aa6b","Type":"ContainerStarted","Data":"2283aa71161d12294cc0d197b858e8755bba1609c2000d53ab380c97d6c0c2fb"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.596266 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5rwsm" podStartSLOduration=126.596240249 podStartE2EDuration="2m6.596240249s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.596226429 +0000 UTC m=+189.323616736" watchObservedRunningTime="2026-03-16 15:16:27.596240249 +0000 UTC m=+189.323630536" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.597777 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podStartSLOduration=126.597770881 podStartE2EDuration="2m6.597770881s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.554165244 +0000 UTC m=+189.281555541" watchObservedRunningTime="2026-03-16 15:16:27.597770881 +0000 UTC m=+189.325161168" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.643867 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.645174 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zxgkb" podStartSLOduration=126.645151268 podStartE2EDuration="2m6.645151268s" 
podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.630288601 +0000 UTC m=+189.357678888" watchObservedRunningTime="2026-03-16 15:16:27.645151268 +0000 UTC m=+189.372541555" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.645280 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.145261701 +0000 UTC m=+189.872651988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.742559 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" event={"ID":"ef5af3b4-3c61-4469-84e2-5e0bac26ed18","Type":"ContainerStarted","Data":"a4636fa5cbafcd8c0fc6180e51522412a4666cffcc4198c90f0777116395b42b"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.745542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.745910 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.245896693 +0000 UTC m=+189.973286980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.762727 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7gf6z" podStartSLOduration=126.762637501 podStartE2EDuration="2m6.762637501s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.675422208 +0000 UTC m=+189.402812495" watchObservedRunningTime="2026-03-16 15:16:27.762637501 +0000 UTC m=+189.490027788" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.764894 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6d989" podStartSLOduration=126.764884212 podStartE2EDuration="2m6.764884212s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.762423726 +0000 UTC m=+189.489814013" watchObservedRunningTime="2026-03-16 15:16:27.764884212 +0000 UTC m=+189.492274499" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.774911 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bx5tm" event={"ID":"90e2fab8-e507-485e-9c3d-63d00f092fa1","Type":"ContainerStarted","Data":"5cc2251b6a414f5f17cc09b68d707816526336cae712d1baf16b6d99398872dc"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.815326 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:27 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:27 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:27 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.815430 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.843771 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" event={"ID":"6d00785d-6730-42d9-8004-f1bbc451d581","Type":"ContainerStarted","Data":"2fcc3b5633e50d6b21bc942c79cd3336b9622b8392eb37fc7218f78f7d99db0d"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.844371 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.846498 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.846787 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.346757162 +0000 UTC m=+190.074147449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.861984 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.862093 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.866404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qv77g" event={"ID":"a14c47f9-54bf-455c-9e43-a36fb2fd871b","Type":"ContainerStarted","Data":"2a066098aa5d37c6e07c345bf6070b713a33f0f158c40cba3677b492e4e22c89"} Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.887402 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bx5tm" podStartSLOduration=9.887370717 podStartE2EDuration="9.887370717s" podCreationTimestamp="2026-03-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:27.885162109 +0000 UTC m=+189.612552396" watchObservedRunningTime="2026-03-16 15:16:27.887370717 +0000 UTC m=+189.614761004" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.915570 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.948589 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:27 crc kubenswrapper[4736]: E0316 15:16:27.958743 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.458719137 +0000 UTC m=+190.186109424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:27 crc kubenswrapper[4736]: I0316 15:16:27.964092 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-smqd4"] Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.058174 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.059028 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.55900574 +0000 UTC m=+190.286396027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: W0316 15:16:28.066512 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5586cec2_8615_4d8b_a695_65ee04613c35.slice/crio-f438470f6358468539bf1a03294ae21e9ee095ccf65486086552b78269ddc7d0 WatchSource:0}: Error finding container f438470f6358468539bf1a03294ae21e9ee095ccf65486086552b78269ddc7d0: Status 404 returned error can't find the container with id f438470f6358468539bf1a03294ae21e9ee095ccf65486086552b78269ddc7d0 Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.177273 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.177728 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.677710905 +0000 UTC m=+190.405101192 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.222025 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podStartSLOduration=127.221167398 podStartE2EDuration="2m7.221167398s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:28.083082503 +0000 UTC m=+189.810472790" watchObservedRunningTime="2026-03-16 15:16:28.221167398 +0000 UTC m=+189.948557695" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.280583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.280824 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.780774312 +0000 UTC m=+190.508164609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.280919 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.281907 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.781896532 +0000 UTC m=+190.509286819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.362471 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.385682 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.385870 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.885836412 +0000 UTC m=+190.613226699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.385998 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.386367 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.886352536 +0000 UTC m=+190.613742823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.486940 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.487287 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.987255795 +0000 UTC m=+190.714646082 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.487826 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.488307 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:28.988288233 +0000 UTC m=+190.715678520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.589200 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.589591 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.089551482 +0000 UTC m=+190.816941779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.589877 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.590239 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.09022974 +0000 UTC m=+190.817620027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.694600 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.694757 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.194736416 +0000 UTC m=+190.922126703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.694894 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.695231 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.195223099 +0000 UTC m=+190.922613386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.783565 4736 ???:1] "http: TLS handshake error from 192.168.126.11:56352: no serving certificate available for the kubelet" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.795953 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.796505 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.296489998 +0000 UTC m=+191.023880285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.808374 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:28 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:28 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:28 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.808459 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.822559 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.823155 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerName="route-controller-manager" containerID="cri-o://c058829e7179e721f29a0829eb6fe919ff56c20f2fcd754f262685d7495f7c50" gracePeriod=30 Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.898018 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.898966 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.398931377 +0000 UTC m=+191.126321694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.900640 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerStarted","Data":"59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9"} Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.901538 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.925031 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" event={"ID":"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2","Type":"ContainerStarted","Data":"a971724281aaa4a7474d18435187abe8156086861cd8869acd0aa3d296575f87"} Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.947811 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" event={"ID":"db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74","Type":"ContainerStarted","Data":"2a3cae87c61e5def305afafad86bdb940c30872dd018b0e6c903c73e533e4a64"} Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.948784 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.950684 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxs96 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.950735 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podUID="db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.973643 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" 
event={"ID":"ef5af3b4-3c61-4469-84e2-5e0bac26ed18","Type":"ContainerStarted","Data":"0caef4152c462cf41a4b674de812f7f74c0a21f61c0ea0457574652700832ac4"} Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.999147 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:28 crc kubenswrapper[4736]: E0316 15:16:28.999489 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.499380505 +0000 UTC m=+191.226770792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:28 crc kubenswrapper[4736]: I0316 15:16:28.999609 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.000192 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.500184527 +0000 UTC m=+191.227574814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.022416 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" event={"ID":"9ad17779-e378-4bfe-bf98-b746baf0d153","Type":"ContainerStarted","Data":"d544af480993fc0a5409bf2fb3fe77e7dd4aaaaf9290b1f71093c05ea6d6d730"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.022472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-smqd4" event={"ID":"5586cec2-8615-4d8b-a695-65ee04613c35","Type":"ContainerStarted","Data":"f438470f6358468539bf1a03294ae21e9ee095ccf65486086552b78269ddc7d0"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.033916 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" event={"ID":"4aa7778f-ab77-4cbe-ac51-99c2f2206b15","Type":"ContainerStarted","Data":"443f8182a3e2422b4f9c1e768501db705e100ef5792ee7f9aa622a85736c9694"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.034091 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.068689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" event={"ID":"a3081e47-19ee-40fa-b5db-9c01bc56228a","Type":"ContainerStarted","Data":"733ed3025ad85754cca17494f4e4e47eb91c267f9a50d7dc0a53776e3009e70e"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.076742 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" event={"ID":"61ab67d8-7624-44b8-8891-37490ba9ab4b","Type":"ContainerStarted","Data":"cf50a48932978769b4c6d8d194a6dd17c514c9467852abf02db65ea793651b2c"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.096483 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podStartSLOduration=128.096453962 podStartE2EDuration="2m8.096453962s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.096454082 +0000 UTC m=+190.823844369" watchObservedRunningTime="2026-03-16 15:16:29.096453962 +0000 UTC m=+190.823844249" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.105775 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.107485 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.607463036 +0000 UTC m=+191.334853323 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.109038 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" event={"ID":"f8234512-ffa3-4229-b3d5-be9360b59dac","Type":"ContainerStarted","Data":"f18bd1fc431a76b33026009aea230e395e86d86874f4f459f07b39ba773e8751"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.113997 4736 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzwwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.114077 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" podUID="f8234512-ffa3-4229-b3d5-be9360b59dac" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.148380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qv77g" event={"ID":"a14c47f9-54bf-455c-9e43-a36fb2fd871b","Type":"ContainerStarted","Data":"94f1e11fb1e1908b15d5e4a65ad116f6a985490018676166d22e8db7fdd07ab8"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.184896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" event={"ID":"1b7712f1-6d20-477f-a190-a50a6d35c238","Type":"ContainerStarted","Data":"2074dabde9a233217cb84b476634636e27b148434ba99284c909fb321585598a"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.205564 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngjsb" event={"ID":"6f744d0a-ff18-46d3-b3ab-731e0308e0b6","Type":"ContainerStarted","Data":"51d30a7518d10771e33d75c09fc18dd19d960d17b2f8f404b9ca2efbc2cfe490"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.205628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ngjsb" event={"ID":"6f744d0a-ff18-46d3-b3ab-731e0308e0b6","Type":"ContainerStarted","Data":"2fcc3e0b9a4b0999a531dc2f3aa26decc1f34acb2c01f9b3355e9eb77b1d8f53"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.206223 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.209062 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.212188 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.712166047 +0000 UTC m=+191.439556334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.235640 4736 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-76btc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.235707 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.235921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" event={"ID":"ab78d179-e8ea-44ca-a95a-c634dab5aa6b","Type":"ContainerStarted","Data":"51e5b810fa2d7bb798e8a78f5073f348d1f9d9d54e13b19da12d80210a5a4de5"} Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.238363 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerName="controller-manager" containerID="cri-o://0ac6ae88a01ff4b19d5652e4c94f7f54025610c3ad8f6c3417fa9da00ed8c558" gracePeriod=30 Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.245923 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kr8vn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.245978 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.312016 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.314519 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.814497755 +0000 UTC m=+191.541888212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.405437 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-df26x" podStartSLOduration=128.405413836 podStartE2EDuration="2m8.405413836s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.401759999 +0000 UTC m=+191.129150286" watchObservedRunningTime="2026-03-16 15:16:29.405413836 +0000 UTC m=+191.132804123" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.406242 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" podStartSLOduration=128.406236079 podStartE2EDuration="2m8.406236079s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.249763223 +0000 UTC m=+190.977153510" watchObservedRunningTime="2026-03-16 15:16:29.406236079 +0000 UTC m=+191.133626366" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.417440 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.433732 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:29.933709753 +0000 UTC m=+191.661100040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.502658 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kzg9q" podStartSLOduration=128.502613667 podStartE2EDuration="2m8.502613667s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.47394209 +0000 UTC m=+191.201332397" watchObservedRunningTime="2026-03-16 15:16:29.502613667 +0000 UTC m=+191.230003954" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.523661 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.524424 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.02440122 +0000 UTC m=+191.751791507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.561645 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podStartSLOduration=128.561624275 podStartE2EDuration="2m8.561624275s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.56028889 +0000 UTC m=+191.287679177" watchObservedRunningTime="2026-03-16 15:16:29.561624275 +0000 UTC m=+191.289014562" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.615389 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ncvk2" podStartSLOduration=128.615358013 podStartE2EDuration="2m8.615358013s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.613802202 +0000 UTC m=+191.341192489" watchObservedRunningTime="2026-03-16 15:16:29.615358013 +0000 UTC m=+191.342748300" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.627571 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.628165 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.128143445 +0000 UTC m=+191.855533732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.730664 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.731027 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.231011086 +0000 UTC m=+191.958401373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.808277 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qv77g" podStartSLOduration=11.808258503 podStartE2EDuration="11.808258503s" podCreationTimestamp="2026-03-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.683539457 +0000 UTC m=+191.410929744" watchObservedRunningTime="2026-03-16 15:16:29.808258503 +0000 UTC m=+191.535648790" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.819471 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" podStartSLOduration=128.819446172 podStartE2EDuration="2m8.819446172s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.801352509 +0000 UTC m=+191.528742806" watchObservedRunningTime="2026-03-16 15:16:29.819446172 +0000 UTC m=+191.546836469" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.809519 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:29 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:29 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:29 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.820460 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" 
podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.834993 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.835374 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.335359678 +0000 UTC m=+192.062749965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.927759 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-vv5d4" podStartSLOduration=128.92773794 podStartE2EDuration="2m8.92773794s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.925212912 +0000 UTC m=+191.652603209" watchObservedRunningTime="2026-03-16 15:16:29.92773794 +0000 UTC m=+191.655128217" Mar 16 15:16:29 crc kubenswrapper[4736]: I0316 15:16:29.935941 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:29 crc kubenswrapper[4736]: E0316 15:16:29.936579 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.436557085 +0000 UTC m=+192.163947372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.041951 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.042315 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.542302474 +0000 UTC m=+192.269692761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.072543 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ngjsb" podStartSLOduration=12.072528072 podStartE2EDuration="12.072528072s" podCreationTimestamp="2026-03-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:29.990756215 +0000 UTC m=+191.718146502" watchObservedRunningTime="2026-03-16 15:16:30.072528072 +0000 UTC m=+191.799918349" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.147750 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.148097 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.648080373 +0000 UTC m=+192.375470660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.236303 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.236362 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.264347 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.264869 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.764841927 +0000 UTC m=+192.492232204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.327621 4736 generic.go:334] "Generic (PLEG): container finished" podID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerID="c058829e7179e721f29a0829eb6fe919ff56c20f2fcd754f262685d7495f7c50" exitCode=0 Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.327772 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" event={"ID":"7984ce35-90df-4462-b2d5-6d25102d7bb5","Type":"ContainerDied","Data":"c058829e7179e721f29a0829eb6fe919ff56c20f2fcd754f262685d7495f7c50"} Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.346262 4736 generic.go:334] "Generic (PLEG): container finished" podID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerID="0ac6ae88a01ff4b19d5652e4c94f7f54025610c3ad8f6c3417fa9da00ed8c558" exitCode=0 Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.346357 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" event={"ID":"84a3bf2d-cfc8-4cc4-a284-1e34af54f453","Type":"ContainerDied","Data":"0ac6ae88a01ff4b19d5652e4c94f7f54025610c3ad8f6c3417fa9da00ed8c558"} Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.346580 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.367892 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.368518 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:30.86849564 +0000 UTC m=+192.595885927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.372266 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" event={"ID":"9ad17779-e378-4bfe-bf98-b746baf0d153","Type":"ContainerStarted","Data":"6a9f3458badf67a7b3eebd2740265b6ad62f003265918f361a9bb03159fdd69a"} Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.375315 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-smqd4" event={"ID":"5586cec2-8615-4d8b-a695-65ee04613c35","Type":"ContainerStarted","Data":"c78d73699e08f9891e4fc0008e9850fe993d62c6d76ee6e0f0e38c88fe85530c"} Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.375339 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-smqd4" event={"ID":"5586cec2-8615-4d8b-a695-65ee04613c35","Type":"ContainerStarted","Data":"fd4f7ae2189766be9c98e4647c9a87dde5db34b97cebb333ed033f74097f1c2e"} Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.377581 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kr8vn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.377661 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.441627 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.460454 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.467521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-smqd4" podStartSLOduration=129.467499488 podStartE2EDuration="2m9.467499488s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:30.465609578 +0000 UTC m=+192.192999865" watchObservedRunningTime="2026-03-16 15:16:30.467499488 +0000 UTC m=+192.194889775" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.469965 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert\") pod \"7984ce35-90df-4462-b2d5-6d25102d7bb5\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " Mar 16 15:16:30 crc 
kubenswrapper[4736]: I0316 15:16:30.470074 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dcrs\" (UniqueName: \"kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs\") pod \"7984ce35-90df-4462-b2d5-6d25102d7bb5\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.470425 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca\") pod \"7984ce35-90df-4462-b2d5-6d25102d7bb5\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.470536 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config\") pod \"7984ce35-90df-4462-b2d5-6d25102d7bb5\" (UID: \"7984ce35-90df-4462-b2d5-6d25102d7bb5\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.470896 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.475491 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca" (OuterVolumeSpecName: "client-ca") pod "7984ce35-90df-4462-b2d5-6d25102d7bb5" (UID: "7984ce35-90df-4462-b2d5-6d25102d7bb5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.475955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config" (OuterVolumeSpecName: "config") pod "7984ce35-90df-4462-b2d5-6d25102d7bb5" (UID: "7984ce35-90df-4462-b2d5-6d25102d7bb5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.504320 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs" (OuterVolumeSpecName: "kube-api-access-5dcrs") pod "7984ce35-90df-4462-b2d5-6d25102d7bb5" (UID: "7984ce35-90df-4462-b2d5-6d25102d7bb5"). InnerVolumeSpecName "kube-api-access-5dcrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.516070 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.016046547 +0000 UTC m=+192.743436834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.574621 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.575075 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dcrs\" (UniqueName: \"kubernetes.io/projected/7984ce35-90df-4462-b2d5-6d25102d7bb5-kube-api-access-5dcrs\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.582560 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.082527595 +0000 UTC m=+192.809917882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.587625 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7984ce35-90df-4462-b2d5-6d25102d7bb5" (UID: "7984ce35-90df-4462-b2d5-6d25102d7bb5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.575096 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.588713 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7984ce35-90df-4462-b2d5-6d25102d7bb5-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.621987 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tbntp" podStartSLOduration=129.621957091 podStartE2EDuration="2m9.621957091s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:30.536009081 +0000 UTC m=+192.263399368" watchObservedRunningTime="2026-03-16 15:16:30.621957091 +0000 UTC m=+192.349347378" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.689896 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.689999 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7984ce35-90df-4462-b2d5-6d25102d7bb5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.690267 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.190254347 +0000 UTC m=+192.917644634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.754310 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.755235 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.790696 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.791039 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.291022103 +0000 UTC m=+193.018412400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.802667 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.803757 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.807488 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:30 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:30 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:30 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.807545 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.828929 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.845283 
4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.894331 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:30 crc kubenswrapper[4736]: E0316 15:16:30.895758 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.395740284 +0000 UTC m=+193.123130571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.996742 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwzp\" (UniqueName: \"kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp\") pod \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.997344 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config\") pod \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.997417 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca\") pod \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.997589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.997633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert\") pod \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\" (UID: \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " Mar 16 15:16:30 crc kubenswrapper[4736]: I0316 15:16:30.997650 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles\") pod \"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\" (UID: 
\"84a3bf2d-cfc8-4cc4-a284-1e34af54f453\") " Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.001486 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.501450202 +0000 UTC m=+193.228840489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.016549 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84a3bf2d-cfc8-4cc4-a284-1e34af54f453" (UID: "84a3bf2d-cfc8-4cc4-a284-1e34af54f453"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.016788 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config" (OuterVolumeSpecName: "config") pod "84a3bf2d-cfc8-4cc4-a284-1e34af54f453" (UID: "84a3bf2d-cfc8-4cc4-a284-1e34af54f453"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.017413 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "84a3bf2d-cfc8-4cc4-a284-1e34af54f453" (UID: "84a3bf2d-cfc8-4cc4-a284-1e34af54f453"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.018133 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca" (OuterVolumeSpecName: "client-ca") pod "84a3bf2d-cfc8-4cc4-a284-1e34af54f453" (UID: "84a3bf2d-cfc8-4cc4-a284-1e34af54f453"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.019920 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp" (OuterVolumeSpecName: "kube-api-access-qgwzp") pod "84a3bf2d-cfc8-4cc4-a284-1e34af54f453" (UID: "84a3bf2d-cfc8-4cc4-a284-1e34af54f453"). InnerVolumeSpecName "kube-api-access-qgwzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.028552 4736 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qgcn7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]log ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]etcd ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/generic-apiserver-start-informers ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/max-in-flight-filter ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 16 15:16:31 crc kubenswrapper[4736]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 16 15:16:31 crc kubenswrapper[4736]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/project.openshift.io-projectcache ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/openshift.io-startinformers ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 16 15:16:31 crc kubenswrapper[4736]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 16 15:16:31 crc kubenswrapper[4736]: livez check failed Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.028629 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" podUID="1b7712f1-6d20-477f-a190-a50a6d35c238" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.092502 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.092762 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerName="route-controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.092776 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerName="route-controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.092789 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerName="controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.092796 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerName="controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.092931 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" containerName="route-controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.092948 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" containerName="controller-manager" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.093302 4736 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.093732 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.094147 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099459 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099483 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099498 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099511 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.099523 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgwzp\" (UniqueName: \"kubernetes.io/projected/84a3bf2d-cfc8-4cc4-a284-1e34af54f453-kube-api-access-qgwzp\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.099955 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.599936787 +0000 UTC m=+193.327327074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.151491 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.171051 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201267 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201532 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201568 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201591 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201612 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201681 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgdq\" (UniqueName: \"kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201758 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.201782 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmx5\" (UniqueName: \"kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.201925 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.701902075 +0000 UTC m=+193.429292352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304757 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgdq\" (UniqueName: \"kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304807 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304859 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304880 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmx5\" (UniqueName: \"kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304910 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304925 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304961 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.304979 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.305008 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.305314 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.80530203 +0000 UTC m=+193.532692317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.307955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.309318 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.311887 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.312894 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: 
\"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.315950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.319342 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.325666 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.328621 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.330197 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.333826 4736 patch_prober.go:28] interesting pod/console-f9d7485db-p78nr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.333923 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p78nr" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.347228 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.347309 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.347393 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.347407 4736 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.348905 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgdq\" (UniqueName: \"kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq\") pod \"route-controller-manager-855977cd4d-6p49j\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.352359 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmx5\" (UniqueName: \"kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5\") pod \"controller-manager-65bb6d87cb-gwzz9\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.382259 4736 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-76btc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded" start-of-body= Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.382356 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": context deadline exceeded" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.408752 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.410467 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:31.910443593 +0000 UTC m=+193.637833880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.424599 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.429147 4736 ???:1] "http: TLS handshake error from 192.168.126.11:56356: no serving certificate available for the kubelet" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.440483 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.452005 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.451699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c" event={"ID":"7984ce35-90df-4462-b2d5-6d25102d7bb5","Type":"ContainerDied","Data":"cd185c058fff283118c1ba65a771933a13840afc5d6a0c78ea18e7f91a6cd1df"} Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.459242 4736 scope.go:117] "RemoveContainer" containerID="c058829e7179e721f29a0829eb6fe919ff56c20f2fcd754f262685d7495f7c50" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.476191 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" event={"ID":"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2","Type":"ContainerStarted","Data":"edfb93780414d056ea3e203f8075e428504d8c2798014188d4a1866569164c04"} Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.490277 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.490353 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cpv4c"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.511182 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.511751 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.011724782 +0000 UTC m=+193.739115069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.512089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" event={"ID":"84a3bf2d-cfc8-4cc4-a284-1e34af54f453","Type":"ContainerDied","Data":"61f729f3e2fe8a327c29147af6afa24aa0b1c8e638d0fbaf1d186ebc86277b39"} Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.519659 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4dzg6" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.524950 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wx2v9" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.562280 4736 scope.go:117] "RemoveContainer" containerID="0ac6ae88a01ff4b19d5652e4c94f7f54025610c3ad8f6c3417fa9da00ed8c558" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.626969 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.630218 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.630576 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.130551682 +0000 UTC m=+193.857941969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.630921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.635354 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-16 15:16:32.135337799 +0000 UTC m=+193.862728086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.649011 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4dzg6"] Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.732648 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.732875 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.232817187 +0000 UTC m=+193.960207464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.732970 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.733998 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.233987958 +0000 UTC m=+193.961378245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.798729 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.832419 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:31 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:31 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:31 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.832495 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.839149 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.839766 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.339744107 +0000 UTC m=+194.067134394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:31 crc kubenswrapper[4736]: I0316 15:16:31.940974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:31 crc kubenswrapper[4736]: E0316 15:16:31.943058 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.44304197 +0000 UTC m=+194.170432257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.043408 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.043652 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.543615221 +0000 UTC m=+194.271005508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.043879 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.044320 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.544311979 +0000 UTC m=+194.271702266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.104367 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kr8vn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.104834 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.105152 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kr8vn container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.105170 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.145600 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.145992 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.645969369 +0000 UTC m=+194.373359656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.188137 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.189344 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.223380 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.248020 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.249010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.249577 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.74955521 +0000 UTC m=+194.476945497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.249913 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.285889 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.350750 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.351011 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.850971073 +0000 UTC m=+194.578361360 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351068 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351116 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjwz\" (UniqueName: \"kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351262 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351365 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351404 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351530 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tzz\" (UniqueName: \"kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.351583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.351673 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.851656891 +0000 UTC m=+194.579047168 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.439373 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.445538 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.452468 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.452920 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tzz\" (UniqueName: \"kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.452958 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.452996 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453016 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkjwz\" (UniqueName: \"kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453065 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453086 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453718 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.453961 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.454046 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:32.954024359 +0000 UTC m=+194.681414636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.454505 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.463991 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.465741 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.492748 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.513733 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.513801 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.546971 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.556147 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.557157 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.057128577 +0000 UTC m=+194.784518854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.566314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkjwz\" (UniqueName: \"kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz\") pod \"certified-operators-cfclg\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.572762 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.585969 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.607993 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tzz\" (UniqueName: \"kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz\") pod \"community-operators-qkm5c\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.611798 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" event={"ID":"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2","Type":"ContainerStarted","Data":"51c40ffa0370574d5b2b05d64224f3e0f847f1010f52b58ea93165ad5bfc069a"} Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.657546 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.659535 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.159501606 +0000 UTC m=+194.886891883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.663204 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.663385 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.663490 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.663596 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thpth\" (UniqueName: 
\"kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.664332 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.164311315 +0000 UTC m=+194.891701602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.672494 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766007 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.766214 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.26619426 +0000 UTC m=+194.993584537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766328 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766406 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766533 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766578 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766597 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvv9z\" (UniqueName: \"kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.766631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thpth\" (UniqueName: \"kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.768871 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.770240 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.270231948 +0000 UTC m=+194.997622235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.770730 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.811715 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:32 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:32 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:32 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.811790 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.814704 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.865740 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.868393 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.869078 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.369057852 +0000 UTC m=+195.096448139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.869124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.869167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.869284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.869331 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvv9z\" (UniqueName: \"kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.870042 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.870300 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.870520 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.370512051 +0000 UTC m=+195.097902338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.900584 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thpth\" (UniqueName: \"kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth\") pod \"community-operators-5kz6z\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.973574 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:32 crc kubenswrapper[4736]: E0316 15:16:32.974332 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.474315118 +0000 UTC m=+195.201705405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:32 crc kubenswrapper[4736]: I0316 15:16:32.984919 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvv9z\" (UniqueName: \"kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z\") pod \"certified-operators-t25g4\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.032330 4736 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-76btc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.032414 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.066497 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7984ce35-90df-4462-b2d5-6d25102d7bb5" 
path="/var/lib/kubelet/pods/7984ce35-90df-4462-b2d5-6d25102d7bb5/volumes" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.067704 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a3bf2d-cfc8-4cc4-a284-1e34af54f453" path="/var/lib/kubelet/pods/84a3bf2d-cfc8-4cc4-a284-1e34af54f453/volumes" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.077614 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.078050 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.578037682 +0000 UTC m=+195.305427969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.132956 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.180575 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.181012 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.680994967 +0000 UTC m=+195.408385244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.225441 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.226491 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.233466 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.260463 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.260672 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.263801 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.282709 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.283078 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.783065777 +0000 UTC m=+195.510456064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.318332 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.383362 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.383510 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.883484743 +0000 UTC m=+195.610875030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.383653 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.383703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.383740 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.384212 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.884184133 +0000 UTC m=+195.611574420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.390905 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:16:33 crc kubenswrapper[4736]: W0316 15:16:33.436003 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e322a73_fb0e_4994_a7f6_6ade35b70364.slice/crio-70c5d7e64197b7cce819cbc9291a2d3e5dfa1c90799a21704eaab449de154820 WatchSource:0}: Error finding container 70c5d7e64197b7cce819cbc9291a2d3e5dfa1c90799a21704eaab449de154820: Status 404 returned error can't find the container with id 70c5d7e64197b7cce819cbc9291a2d3e5dfa1c90799a21704eaab449de154820 Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.485949 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.486202 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.486259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.486401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.486498 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:33.986477829 +0000 UTC m=+195.713868116 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.525735 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.549421 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.587220 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.587623 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.087608874 +0000 UTC m=+195.814999161 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.687854 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.688879 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.188864042 +0000 UTC m=+195.916254329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.756348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" event={"ID":"02c1a3b8-f0be-49ef-ad4e-6734d14df6b2","Type":"ContainerStarted","Data":"7839eb5a52c4f0270b92e2b847abacc260f663c34bf63fdfe4160aa7341ea629"} Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.766242 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" event={"ID":"02b53811-2589-4440-936b-e1793e7b8878","Type":"ContainerStarted","Data":"b86f188351af339e6e368cf108470fc82176a2152e86decd8495fa785dbb5476"} Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.785943 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" event={"ID":"8e322a73-fb0e-4994-a7f6-6ade35b70364","Type":"ContainerStarted","Data":"70c5d7e64197b7cce819cbc9291a2d3e5dfa1c90799a21704eaab449de154820"} Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.803450 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.804057 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.304038793 +0000 UTC m=+196.031429080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.811740 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:33 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:33 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:33 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.811822 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.812366 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p47pv" podStartSLOduration=15.812346645 podStartE2EDuration="15.812346645s" podCreationTimestamp="2026-03-16 15:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:33.811836472 +0000 UTC m=+195.539226779" watchObservedRunningTime="2026-03-16 15:16:33.812346645 +0000 UTC m=+195.539736932" Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.909230 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.909431 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.409400592 +0000 UTC m=+196.136790879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.909991 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:33 crc kubenswrapper[4736]: E0316 15:16:33.911014 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.410998905 +0000 UTC m=+196.138389192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:33 crc kubenswrapper[4736]: I0316 15:16:33.972684 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.014780 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.015660 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.515627884 +0000 UTC m=+196.243018171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.118088 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.119605 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.619592475 +0000 UTC m=+196.346982762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.121431 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.235847 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.236448 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.73642453 +0000 UTC m=+196.463814807 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.293860 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.295059 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.297762 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.301011 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.339835 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.340191 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.840177816 +0000 UTC m=+196.567568103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.343307 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:16:34 crc kubenswrapper[4736]: W0316 15:16:34.395015 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26502a77_e11a_496c_8bf5_483d61d2ed8d.slice/crio-dc7890518e14952e1980342bd1f2281aef4266457f56f0ecc4a3b773f0123f78 WatchSource:0}: Error finding container dc7890518e14952e1980342bd1f2281aef4266457f56f0ecc4a3b773f0123f78: Status 404 returned error can't find the container with id dc7890518e14952e1980342bd1f2281aef4266457f56f0ecc4a3b773f0123f78 Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.440831 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.441714 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.441751 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmz5\" (UniqueName: \"kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5\") pod \"redhat-marketplace-fz7zr\" 
(UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.441828 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.441975 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:34.941959308 +0000 UTC m=+196.669349595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.496913 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.501796 4736 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.546048 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.547574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.547904 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmz5\" (UniqueName: \"kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.547995 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.549519 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.546687 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.550962 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.050950764 +0000 UTC m=+196.778341051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.567242 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.579957 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.601746 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmz5\" (UniqueName: \"kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5\") pod \"redhat-marketplace-fz7zr\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.649904 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.650369 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.150352014 +0000 UTC m=+196.877742301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: W0316 15:16:34.658375 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod21021b43_79d3_4b59_b2ab_05308d8ad9f2.slice/crio-6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39 WatchSource:0}: Error finding container 6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39: Status 404 returned error can't find the container with id 6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39 Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.672854 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.675907 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.687709 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.695074 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.752082 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.753715 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.253698447 +0000 UTC m=+196.981088734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.808834 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerStarted","Data":"dd212cb0479fd262985e9374633defa3b6809fffcab5cff743342b606faca3d4"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.809976 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:34 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:34 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:34 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.810014 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.814805 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" event={"ID":"02b53811-2589-4440-936b-e1793e7b8878","Type":"ContainerStarted","Data":"4b64c635b6c18cd8a5330c95c24ccdd7021b7dba2da9e7b2adec4b82b5dd64b2"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.818320 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.839025 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" event={"ID":"8e322a73-fb0e-4994-a7f6-6ade35b70364","Type":"ContainerStarted","Data":"25524105922a9febc93645425600220f39fd1fb1ffb7c045b3faa090d9396073"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.844410 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.851563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerStarted","Data":"dc7890518e14952e1980342bd1f2281aef4266457f56f0ecc4a3b773f0123f78"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.855885 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.856150 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.856197 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.856247 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwdt\" (UniqueName: \"kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.856389 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.356370084 +0000 UTC m=+197.083760371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.875378 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21021b43-79d3-4b59-b2ab-05308d8ad9f2","Type":"ContainerStarted","Data":"6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.881415 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" podStartSLOduration=5.881392974 podStartE2EDuration="5.881392974s" podCreationTimestamp="2026-03-16 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:34.875696421 +0000 UTC m=+196.603086708" watchObservedRunningTime="2026-03-16 15:16:34.881392974 +0000 UTC m=+196.608783261" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.887447 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerStarted","Data":"28410739555aef9eca2cdcd4d5a07bf4c81278a86fc1ec288f5ee92ce8e01fbb"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.899960 4736 generic.go:334] "Generic (PLEG): container finished" podID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerID="e2c4a70f32e43c645141549522a3c2c2e12924943f5f0c5f048846518d3e3eb8" exitCode=0 Mar 16 15:16:34 crc kubenswrapper[4736]: 
I0316 15:16:34.900346 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerDied","Data":"e2c4a70f32e43c645141549522a3c2c2e12924943f5f0c5f048846518d3e3eb8"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.900409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerStarted","Data":"beaf9729b22b6de8596c1b74adaffbccbfc09082f86c77272dba47151f85fd38"} Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.909842 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.957686 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.957770 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.957794 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.957843 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwdt\" (UniqueName: \"kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.959710 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.960200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:34 crc kubenswrapper[4736]: E0316 15:16:34.960245 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-16 15:16:35.460227532 +0000 UTC m=+197.187617819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:34 crc kubenswrapper[4736]: I0316 15:16:34.974429 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" podStartSLOduration=5.974411722 podStartE2EDuration="5.974411722s" podCreationTimestamp="2026-03-16 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:34.974044592 +0000 UTC m=+196.701434879" watchObservedRunningTime="2026-03-16 15:16:34.974411722 +0000 UTC m=+196.701802009" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.059895 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:35 crc kubenswrapper[4736]: E0316 15:16:35.061310 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.561279216 +0000 UTC m=+197.288669503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.081664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwdt\" (UniqueName: \"kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt\") pod \"redhat-marketplace-dps9k\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.097907 4736 ???:1] "http: TLS handshake error from 192.168.126.11:56366: no serving certificate available for the kubelet" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.106043 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.108701 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.113226 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.162930 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:35 crc kubenswrapper[4736]: E0316 15:16:35.163409 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.663396728 +0000 UTC m=+197.390787015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.167972 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.223647 4736 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-16T15:16:34.50183711Z","Handler":null,"Name":""} Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.263327 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.269945 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.270253 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.270288 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.270310 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hsrrk\" (UniqueName: \"kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: E0316 15:16:35.270441 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.770420211 +0000 UTC m=+197.497810698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.271022 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.293325 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.308141 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373500 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373623 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373661 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373692 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8xhn\" (UniqueName: \"kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn\") pod \"redhat-operators-lvjdd\" (UID: 
\"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373713 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrrk\" (UniqueName: \"kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373782 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.373814 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:35 crc kubenswrapper[4736]: E0316 15:16:35.375148 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.875129821 +0000 UTC m=+197.602520108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-v8kpj" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.376531 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.401316 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.401977 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrrk\" (UniqueName: \"kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk\") pod \"redhat-operators-tcpnf\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.476537 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.476926 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.477031 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.477060 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8xhn\" (UniqueName: \"kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: E0316 15:16:35.477536 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-16 15:16:35.977516921 +0000 UTC m=+197.704907208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.477953 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.478182 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.494761 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.517602 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8xhn\" (UniqueName: \"kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn\") pod \"redhat-operators-lvjdd\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.528023 4736 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.528077 4736 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.579434 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.601543 4736 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.601600 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.651620 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.699262 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.795451 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.840755 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:35 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:35 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:35 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.845269 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.850321 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qgcn7" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.943357 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-v8kpj\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.989957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 16 15:16:35 crc kubenswrapper[4736]: I0316 15:16:35.997131 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerStarted","Data":"240100c7c1f09ef7c07acb1519dd6fb83d765b09ec61841640f9e5991ee452c6"} Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.031949 4736 generic.go:334] "Generic (PLEG): container finished" podID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerID="5b78202d24b2123af522c393e8280a1eea4835d1b6b1e477e41d61e85b1dcdb4" exitCode=0 Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.032048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerDied","Data":"5b78202d24b2123af522c393e8280a1eea4835d1b6b1e477e41d61e85b1dcdb4"} Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.056554 4736 generic.go:334] "Generic (PLEG): container finished" podID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerID="2581552dafdf852f01bc737a7388894a9ba69a6d680d4d83a031660169d07f92" exitCode=0 Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.056751 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerDied","Data":"2581552dafdf852f01bc737a7388894a9ba69a6d680d4d83a031660169d07f92"} Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.089544 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.126351 4736 generic.go:334] "Generic (PLEG): container finished" podID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerID="5b12af9e3b54963b5e8e5b98814ed043daad58c499c93fd5863a3f1e51120dd5" exitCode=0 Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.126455 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerDied","Data":"5b12af9e3b54963b5e8e5b98814ed043daad58c499c93fd5863a3f1e51120dd5"} Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.212831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21021b43-79d3-4b59-b2ab-05308d8ad9f2","Type":"ContainerStarted","Data":"d22faeaa26cf05d50cdf26fae9b36cd08070839e5cee06ac40795f371fcbaa70"} Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.355750 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.482723 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.4826999 podStartE2EDuration="3.4826999s" podCreationTimestamp="2026-03-16 15:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:36.40269828 +0000 UTC m=+198.130088567" watchObservedRunningTime="2026-03-16 15:16:36.4826999 +0000 UTC m=+198.210090187" Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.485215 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.741870 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.765542 4736 ???:1] "http: TLS handshake error from 192.168.126.11:56368: no serving certificate available for the kubelet" Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.822931 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:36 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:36 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:36 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:36 crc kubenswrapper[4736]: I0316 15:16:36.823517 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.020895 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.171860 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.239263 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ngjsb" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.250190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" event={"ID":"8ccfe744-3d0c-404c-aed7-94c575a05b34","Type":"ContainerStarted","Data":"1084d7a9051df595dbb0c8bfcff027e0b1d2ea778112918d9e425e42f6d5e1dd"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.257414 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c4444c3-8376-49ab-a094-348594b05bdb" containerID="cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc" exitCode=0 Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.257485 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerDied","Data":"cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 
15:16:37.257512 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerStarted","Data":"320f36cfa17ae95b40451342750c8e1df9f9ddd423cfcee96ac36f3be8bfb37a"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.289986 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerStarted","Data":"983fc78448b7669bba68b1675c2093ad19dcb7b7994a9f0695eec8bb00ab047e"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.310326 4736 generic.go:334] "Generic (PLEG): container finished" podID="21021b43-79d3-4b59-b2ab-05308d8ad9f2" containerID="d22faeaa26cf05d50cdf26fae9b36cd08070839e5cee06ac40795f371fcbaa70" exitCode=0 Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.310423 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21021b43-79d3-4b59-b2ab-05308d8ad9f2","Type":"ContainerDied","Data":"d22faeaa26cf05d50cdf26fae9b36cd08070839e5cee06ac40795f371fcbaa70"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.315165 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.316114 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.328551 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.333723 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.342490 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.369594 4736 generic.go:334] "Generic (PLEG): container finished" podID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerID="7944eccfa093949af54052118a9973403889526f4aa971d781d02cc4e66cc879" exitCode=0 Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.370330 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerDied","Data":"7944eccfa093949af54052118a9973403889526f4aa971d781d02cc4e66cc879"} Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.419247 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.459089 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.461337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: W0316 15:16:37.535090 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5522833_034d_43cc_954b_427ef568ce69.slice/crio-8266123176f45f541c52df398b8a9eecf289e0a1974dfdeaef02b00f0a8941c2 WatchSource:0}: Error finding container 8266123176f45f541c52df398b8a9eecf289e0a1974dfdeaef02b00f0a8941c2: Status 404 returned error can't find the container with id 8266123176f45f541c52df398b8a9eecf289e0a1974dfdeaef02b00f0a8941c2 Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.562772 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.562942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.564815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.596906 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.656829 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.805150 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:37 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:37 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:37 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:37 crc kubenswrapper[4736]: I0316 15:16:37.805213 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.144337 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.380046 4736 generic.go:334] "Generic (PLEG): container finished" podID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerID="dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5" exitCode=0 Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.380825 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerDied","Data":"dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5"} Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.388528 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0672f4-9ead-4762-824a-20f1be9e274d","Type":"ContainerStarted","Data":"f88ab7bc6ec6600b2f4ee2ac0ac00283343f5dbdf998a7f5f51681ce210827cb"} Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.430359 4736 generic.go:334] "Generic (PLEG): container finished" podID="d5522833-034d-43cc-954b-427ef568ce69" containerID="09f0a15b6fdb9e131cce6d7e2c2dfb000798917494b24142965e4e28390ce0dd" exitCode=0 Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.430452 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerDied","Data":"09f0a15b6fdb9e131cce6d7e2c2dfb000798917494b24142965e4e28390ce0dd"} Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.430482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerStarted","Data":"8266123176f45f541c52df398b8a9eecf289e0a1974dfdeaef02b00f0a8941c2"} Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.440940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" event={"ID":"8ccfe744-3d0c-404c-aed7-94c575a05b34","Type":"ContainerStarted","Data":"ece24d02401d4dfa92dd4e1ea857c5dd736234308f790c057bfca365fb0ff31e"} Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.440981 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.492763 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" podStartSLOduration=137.49274584 podStartE2EDuration="2m17.49274584s" podCreationTimestamp="2026-03-16 15:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:16:38.49089872 +0000 UTC m=+200.218289007" watchObservedRunningTime="2026-03-16 15:16:38.49274584 +0000 UTC m=+200.220136127" Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.803846 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:38 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:38 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:38 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.803970 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.885955 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.993450 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access\") pod \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.993977 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir\") pod \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\" (UID: \"21021b43-79d3-4b59-b2ab-05308d8ad9f2\") " Mar 16 15:16:38 crc kubenswrapper[4736]: I0316 15:16:38.994376 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "21021b43-79d3-4b59-b2ab-05308d8ad9f2" (UID: "21021b43-79d3-4b59-b2ab-05308d8ad9f2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.014523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "21021b43-79d3-4b59-b2ab-05308d8ad9f2" (UID: "21021b43-79d3-4b59-b2ab-05308d8ad9f2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.095198 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.095231 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/21021b43-79d3-4b59-b2ab-05308d8ad9f2-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.472068 4736 generic.go:334] "Generic (PLEG): container finished" podID="e95b535a-2e38-4797-97c6-5ab54160b983" containerID="988ab0a1449a75719b2eaa713d728f43e859fed2b271c60d45a8ff342d1ac67d" exitCode=0 Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.472164 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" event={"ID":"e95b535a-2e38-4797-97c6-5ab54160b983","Type":"ContainerDied","Data":"988ab0a1449a75719b2eaa713d728f43e859fed2b271c60d45a8ff342d1ac67d"} Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.525347 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.525932 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"21021b43-79d3-4b59-b2ab-05308d8ad9f2","Type":"ContainerDied","Data":"6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39"} Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.526015 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bac23ff252dfae9cce030fc925a9bd656c2e95caad82bc727382fa256a91c39" Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.804868 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:39 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:39 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:39 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:39 crc kubenswrapper[4736]: I0316 15:16:39.805361 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:40 crc kubenswrapper[4736]: E0316 15:16:40.109833 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod2a0672f4_9ead_4762_824a_20f1be9e274d.slice/crio-47ca2e76cc486d6f760d1c5e2a04fe87d1666ac60e3e7cd0e5c989c15c391945.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod2a0672f4_9ead_4762_824a_20f1be9e274d.slice/crio-conmon-47ca2e76cc486d6f760d1c5e2a04fe87d1666ac60e3e7cd0e5c989c15c391945.scope\": RecentStats: unable to find data in memory cache]" Mar 16 15:16:40 crc kubenswrapper[4736]: I0316 15:16:40.570055 4736 generic.go:334] "Generic (PLEG): container finished" podID="2a0672f4-9ead-4762-824a-20f1be9e274d" 
containerID="47ca2e76cc486d6f760d1c5e2a04fe87d1666ac60e3e7cd0e5c989c15c391945" exitCode=0 Mar 16 15:16:40 crc kubenswrapper[4736]: I0316 15:16:40.570704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0672f4-9ead-4762-824a-20f1be9e274d","Type":"ContainerDied","Data":"47ca2e76cc486d6f760d1c5e2a04fe87d1666ac60e3e7cd0e5c989c15c391945"} Mar 16 15:16:40 crc kubenswrapper[4736]: I0316 15:16:40.803826 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:40 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:40 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:40 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:40 crc kubenswrapper[4736]: I0316 15:16:40.803912 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:40 crc kubenswrapper[4736]: I0316 15:16:40.976836 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.071517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume\") pod \"e95b535a-2e38-4797-97c6-5ab54160b983\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.071604 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume\") pod \"e95b535a-2e38-4797-97c6-5ab54160b983\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.071814 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thmhf\" (UniqueName: \"kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf\") pod \"e95b535a-2e38-4797-97c6-5ab54160b983\" (UID: \"e95b535a-2e38-4797-97c6-5ab54160b983\") " Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.073636 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume" (OuterVolumeSpecName: "config-volume") pod "e95b535a-2e38-4797-97c6-5ab54160b983" (UID: "e95b535a-2e38-4797-97c6-5ab54160b983"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.082397 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf" (OuterVolumeSpecName: "kube-api-access-thmhf") pod "e95b535a-2e38-4797-97c6-5ab54160b983" (UID: "e95b535a-2e38-4797-97c6-5ab54160b983"). InnerVolumeSpecName "kube-api-access-thmhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.084643 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e95b535a-2e38-4797-97c6-5ab54160b983" (UID: "e95b535a-2e38-4797-97c6-5ab54160b983"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.093186 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thmhf\" (UniqueName: \"kubernetes.io/projected/e95b535a-2e38-4797-97c6-5ab54160b983-kube-api-access-thmhf\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.097908 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e95b535a-2e38-4797-97c6-5ab54160b983-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.097927 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95b535a-2e38-4797-97c6-5ab54160b983-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.329231 4736 patch_prober.go:28] interesting pod/console-f9d7485db-p78nr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.329284 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-p78nr" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.345373 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.345436 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.346722 4736 patch_prober.go:28] interesting pod/downloads-7954f5f757-2bzh5 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.346751 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2bzh5" podUID="5881f71f-e94a-4bc8-8d08-8bf079fd10e3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.30:8080/\": dial tcp 10.217.0.30:8080: connect: connection refused" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.616589 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.618799 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89" event={"ID":"e95b535a-2e38-4797-97c6-5ab54160b983","Type":"ContainerDied","Data":"d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500"} Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.618884 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1374411c6439fc2150dfe79c03f4e2c3580c7e7cef253759648e4c8d3b64500" Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.803749 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:41 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:41 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:41 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:41 crc kubenswrapper[4736]: I0316 15:16:41.803804 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.049700 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.111928 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.156563 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.223601 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir\") pod \"2a0672f4-9ead-4762-824a-20f1be9e274d\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.223665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access\") pod \"2a0672f4-9ead-4762-824a-20f1be9e274d\" (UID: \"2a0672f4-9ead-4762-824a-20f1be9e274d\") " Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.225135 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a0672f4-9ead-4762-824a-20f1be9e274d" (UID: "2a0672f4-9ead-4762-824a-20f1be9e274d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.229775 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a0672f4-9ead-4762-824a-20f1be9e274d" (UID: "2a0672f4-9ead-4762-824a-20f1be9e274d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.330745 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0672f4-9ead-4762-824a-20f1be9e274d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.330782 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0672f4-9ead-4762-824a-20f1be9e274d-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.665792 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0672f4-9ead-4762-824a-20f1be9e274d","Type":"ContainerDied","Data":"f88ab7bc6ec6600b2f4ee2ac0ac00283343f5dbdf998a7f5f51681ce210827cb"} Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.665869 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f88ab7bc6ec6600b2f4ee2ac0ac00283343f5dbdf998a7f5f51681ce210827cb" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.665986 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.803847 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:42 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:42 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:42 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:42 crc kubenswrapper[4736]: I0316 15:16:42.803957 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:43 crc kubenswrapper[4736]: I0316 15:16:43.802041 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:43 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:43 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:43 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:43 crc kubenswrapper[4736]: I0316 15:16:43.802137 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:44 crc kubenswrapper[4736]: I0316 15:16:44.801982 4736 patch_prober.go:28] interesting 
pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 16 15:16:44 crc kubenswrapper[4736]: [-]has-synced failed: reason withheld Mar 16 15:16:44 crc kubenswrapper[4736]: [+]process-running ok Mar 16 15:16:44 crc kubenswrapper[4736]: healthz check failed Mar 16 15:16:44 crc kubenswrapper[4736]: I0316 15:16:44.802075 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 16 15:16:45 crc kubenswrapper[4736]: I0316 15:16:45.874823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:45 crc kubenswrapper[4736]: I0316 15:16:45.878170 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ns4mr" Mar 16 15:16:46 crc kubenswrapper[4736]: I0316 15:16:46.658690 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:16:46 crc kubenswrapper[4736]: I0316 15:16:46.659013 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" containerID="cri-o://25524105922a9febc93645425600220f39fd1fb1ffb7c045b3faa090d9396073" gracePeriod=30 Mar 16 15:16:46 crc kubenswrapper[4736]: I0316 15:16:46.668658 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:16:46 crc kubenswrapper[4736]: I0316 15:16:46.669143 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" containerID="cri-o://4b64c635b6c18cd8a5330c95c24ccdd7021b7dba2da9e7b2adec4b82b5dd64b2" gracePeriod=30 Mar 16 15:16:47 crc kubenswrapper[4736]: I0316 15:16:47.834476 4736 generic.go:334] "Generic (PLEG): container finished" podID="02b53811-2589-4440-936b-e1793e7b8878" containerID="4b64c635b6c18cd8a5330c95c24ccdd7021b7dba2da9e7b2adec4b82b5dd64b2" exitCode=0 Mar 16 15:16:47 crc kubenswrapper[4736]: I0316 15:16:47.835027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" event={"ID":"02b53811-2589-4440-936b-e1793e7b8878","Type":"ContainerDied","Data":"4b64c635b6c18cd8a5330c95c24ccdd7021b7dba2da9e7b2adec4b82b5dd64b2"} Mar 16 15:16:47 crc kubenswrapper[4736]: I0316 15:16:47.840831 4736 generic.go:334] "Generic (PLEG): container finished" podID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerID="25524105922a9febc93645425600220f39fd1fb1ffb7c045b3faa090d9396073" exitCode=0 Mar 16 15:16:47 crc kubenswrapper[4736]: I0316 15:16:47.840876 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" event={"ID":"8e322a73-fb0e-4994-a7f6-6ade35b70364","Type":"ContainerDied","Data":"25524105922a9febc93645425600220f39fd1fb1ffb7c045b3faa090d9396073"} Mar 16 15:16:51 crc kubenswrapper[4736]: 
I0316 15:16:51.332835 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.337432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.365824 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2bzh5" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.431268 4736 patch_prober.go:28] interesting pod/route-controller-manager-855977cd4d-6p49j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.431345 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.441698 4736 patch_prober.go:28] interesting pod/controller-manager-65bb6d87cb-gwzz9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.442066 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.916460 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.916564 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.918756 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.919603 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.929096 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:51 crc kubenswrapper[4736]: I0316 15:16:51.947256 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.007664 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.017483 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.017524 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.020859 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.030055 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.043554 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.045477 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.298729 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:16:52 crc kubenswrapper[4736]: I0316 15:16:52.316220 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 16 15:16:56 crc kubenswrapper[4736]: I0316 15:16:56.099022 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:16:57 crc kubenswrapper[4736]: I0316 15:16:57.306369 4736 ???:1] "http: TLS handshake error from 192.168.126.11:34404: no serving certificate available for the kubelet" Mar 16 15:17:01 crc kubenswrapper[4736]: I0316 15:17:01.427002 4736 patch_prober.go:28] interesting pod/route-controller-manager-855977cd4d-6p49j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" start-of-body= Mar 16 15:17:01 crc kubenswrapper[4736]: I0316 15:17:01.427695 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.46:8443/healthz\": dial tcp 10.217.0.46:8443: connect: connection refused" Mar 16 15:17:01 crc kubenswrapper[4736]: I0316 15:17:01.442478 4736 patch_prober.go:28] interesting pod/controller-manager-65bb6d87cb-gwzz9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" start-of-body= Mar 16 15:17:01 crc kubenswrapper[4736]: I0316 15:17:01.442581 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.45:8443/healthz\": dial tcp 10.217.0.45:8443: connect: connection refused" Mar 16 15:17:02 crc kubenswrapper[4736]: I0316 15:17:02.501569 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" Mar 16 15:17:03 crc kubenswrapper[4736]: E0316 15:17:03.933647 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 16 15:17:03 crc kubenswrapper[4736]: E0316 15:17:03.934115 4736 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 16 15:17:03 crc kubenswrapper[4736]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 16 15:17:03 crc kubenswrapper[4736]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cjqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29561236-cxwv6_openshift-infra(ae9de77b-767a-4a87-b4bd-648728dc9826): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 16 15:17:03 crc kubenswrapper[4736]: > logger="UnhandledError" Mar 16 15:17:03 crc kubenswrapper[4736]: E0316 15:17:03.935318 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.123987 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.246662 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.253929 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.293837 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.294238 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294253 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.294270 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294277 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.294290 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95b535a-2e38-4797-97c6-5ab54160b983" containerName="collect-profiles" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294297 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95b535a-2e38-4797-97c6-5ab54160b983" containerName="collect-profiles" Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.294308 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21021b43-79d3-4b59-b2ab-05308d8ad9f2" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294315 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="21021b43-79d3-4b59-b2ab-05308d8ad9f2" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: E0316 15:17:04.294336 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0672f4-9ead-4762-824a-20f1be9e274d" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294342 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0672f4-9ead-4762-824a-20f1be9e274d" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294452 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0672f4-9ead-4762-824a-20f1be9e274d" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294466 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" containerName="controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294477 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="21021b43-79d3-4b59-b2ab-05308d8ad9f2" containerName="pruner" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294486 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95b535a-2e38-4797-97c6-5ab54160b983" containerName="collect-profiles" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.294496 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b53811-2589-4440-936b-e1793e7b8878" containerName="route-controller-manager" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.296725 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.302064 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418242 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert\") pod \"02b53811-2589-4440-936b-e1793e7b8878\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418306 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config\") pod \"8e322a73-fb0e-4994-a7f6-6ade35b70364\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418354 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgdq\" (UniqueName: \"kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq\") pod \"02b53811-2589-4440-936b-e1793e7b8878\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418382 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmx5\" (UniqueName: \"kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5\") pod \"8e322a73-fb0e-4994-a7f6-6ade35b70364\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418415 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert\") pod \"8e322a73-fb0e-4994-a7f6-6ade35b70364\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418444 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca\") pod \"02b53811-2589-4440-936b-e1793e7b8878\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418486 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca\") pod \"8e322a73-fb0e-4994-a7f6-6ade35b70364\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config\") pod \"02b53811-2589-4440-936b-e1793e7b8878\" (UID: \"02b53811-2589-4440-936b-e1793e7b8878\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418552 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles\") pod \"8e322a73-fb0e-4994-a7f6-6ade35b70364\" (UID: \"8e322a73-fb0e-4994-a7f6-6ade35b70364\") " Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418715 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418772 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.418845 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkfrh\" (UniqueName: \"kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.420318 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config" (OuterVolumeSpecName: "config") pod "8e322a73-fb0e-4994-a7f6-6ade35b70364" (UID: "8e322a73-fb0e-4994-a7f6-6ade35b70364"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.421481 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config" (OuterVolumeSpecName: "config") pod "02b53811-2589-4440-936b-e1793e7b8878" (UID: "02b53811-2589-4440-936b-e1793e7b8878"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.422179 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca" (OuterVolumeSpecName: "client-ca") pod "8e322a73-fb0e-4994-a7f6-6ade35b70364" (UID: "8e322a73-fb0e-4994-a7f6-6ade35b70364"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.422763 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca" (OuterVolumeSpecName: "client-ca") pod "02b53811-2589-4440-936b-e1793e7b8878" (UID: "02b53811-2589-4440-936b-e1793e7b8878"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.423507 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8e322a73-fb0e-4994-a7f6-6ade35b70364" (UID: "8e322a73-fb0e-4994-a7f6-6ade35b70364"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.426228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5" (OuterVolumeSpecName: "kube-api-access-4fmx5") pod "8e322a73-fb0e-4994-a7f6-6ade35b70364" (UID: "8e322a73-fb0e-4994-a7f6-6ade35b70364"). InnerVolumeSpecName "kube-api-access-4fmx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.427006 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq" (OuterVolumeSpecName: "kube-api-access-8fgdq") pod "02b53811-2589-4440-936b-e1793e7b8878" (UID: "02b53811-2589-4440-936b-e1793e7b8878"). InnerVolumeSpecName "kube-api-access-8fgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.429095 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8e322a73-fb0e-4994-a7f6-6ade35b70364" (UID: "8e322a73-fb0e-4994-a7f6-6ade35b70364"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.433827 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02b53811-2589-4440-936b-e1793e7b8878" (UID: "02b53811-2589-4440-936b-e1793e7b8878"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.520669 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkfrh\" (UniqueName: \"kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.520900 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.520952 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.520987 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.521028 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b53811-2589-4440-936b-e1793e7b8878-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.521282 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522170 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522239 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgdq\" (UniqueName: \"kubernetes.io/projected/02b53811-2589-4440-936b-e1793e7b8878-kube-api-access-8fgdq\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522258 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmx5\" (UniqueName: \"kubernetes.io/projected/8e322a73-fb0e-4994-a7f6-6ade35b70364-kube-api-access-4fmx5\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522270 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e322a73-fb0e-4994-a7f6-6ade35b70364-serving-cert\") on node \"crc\" DevicePath \"\"" 
Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522281 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522290 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522300 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b53811-2589-4440-936b-e1793e7b8878-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522311 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e322a73-fb0e-4994-a7f6-6ade35b70364-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.522860 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.531823 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.541223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkfrh\" (UniqueName: \"kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh\") pod \"route-controller-manager-585f9bb8d5-82rs8\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:04 crc kubenswrapper[4736]: I0316 15:17:04.640929 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.127308 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.127296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9" event={"ID":"8e322a73-fb0e-4994-a7f6-6ade35b70364","Type":"ContainerDied","Data":"70c5d7e64197b7cce819cbc9291a2d3e5dfa1c90799a21704eaab449de154820"} Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.127808 4736 scope.go:117] "RemoveContainer" containerID="25524105922a9febc93645425600220f39fd1fb1ffb7c045b3faa090d9396073" Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.130731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" event={"ID":"02b53811-2589-4440-936b-e1793e7b8878","Type":"ContainerDied","Data":"b86f188351af339e6e368cf108470fc82176a2152e86decd8495fa785dbb5476"} Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.130795 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j" Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.150662 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.154469 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65bb6d87cb-gwzz9"] Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.160899 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:17:05 crc kubenswrapper[4736]: I0316 15:17:05.167083 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855977cd4d-6p49j"] Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.638427 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.639459 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.642953 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.644276 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.644341 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.644713 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.646475 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.646592 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.651731 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.655697 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.693009 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.693060 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.693093 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.693135 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhnk\" (UniqueName: \"kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.693350 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.712316 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.793902 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.794395 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.794434 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhnk\" (UniqueName: \"kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.794485 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.794552 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.794923 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.795882 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.796087 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.803346 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.812259 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhnk\" (UniqueName: \"kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk\") pod \"controller-manager-57b7cc5445-2pw5j\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.967313 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.985628 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b53811-2589-4440-936b-e1793e7b8878" path="/var/lib/kubelet/pods/02b53811-2589-4440-936b-e1793e7b8878/volumes" Mar 16 15:17:06 crc kubenswrapper[4736]: I0316 15:17:06.986325 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e322a73-fb0e-4994-a7f6-6ade35b70364" path="/var/lib/kubelet/pods/8e322a73-fb0e-4994-a7f6-6ade35b70364/volumes" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.274385 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.275652 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.277639 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.277940 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.283098 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.302702 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.302751 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.403594 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.403643 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.403708 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.419594 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:07 crc kubenswrapper[4736]: I0316 15:17:07.606320 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:08 crc kubenswrapper[4736]: I0316 15:17:08.148314 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b4255a0c62668361976339c8a7fe9d08dbf013cb551147b83e91ebe2ef88cdeb"} Mar 16 15:17:08 crc kubenswrapper[4736]: I0316 15:17:08.508979 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:17:08 crc kubenswrapper[4736]: I0316 15:17:08.509035 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:17:09 crc kubenswrapper[4736]: I0316 15:17:09.235096 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.273182 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.274213 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.301128 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.393134 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.393235 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.393321 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.495031 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.495140 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.495176 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.495320 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.495427 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.517432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access\") pod \"installer-9-crc\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:12 crc kubenswrapper[4736]: I0316 15:17:12.591440 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.422002 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.422341 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsrrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tcpnf_openshift-marketplace(cde7b0b7-ad25-49ce-8441-9f6517808da6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.427338 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tcpnf" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.449605 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.449833 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8xhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lvjdd_openshift-marketplace(d5522833-034d-43cc-954b-427ef568ce69): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:13 crc kubenswrapper[4736]: E0316 15:17:13.451007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lvjdd" podUID="d5522833-034d-43cc-954b-427ef568ce69" Mar 16 15:17:15 crc kubenswrapper[4736]: E0316 15:17:15.978956 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 16 15:17:15 crc kubenswrapper[4736]: E0316 15:17:15.979442 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thpth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5kz6z_openshift-marketplace(97816ab1-5779-4821-9bba-63b4e6d4586b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:15 crc kubenswrapper[4736]: E0316 15:17:15.980651 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5kz6z" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" Mar 16 15:17:18 crc kubenswrapper[4736]: E0316 15:17:18.948949 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 16 15:17:18 crc kubenswrapper[4736]: E0316 15:17:18.949650 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snmz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-fz7zr_openshift-marketplace(880121ba-67c4-47f7-86a7-1c0caead4c3a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:18 crc kubenswrapper[4736]: E0316 15:17:18.950713 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fz7zr" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" Mar 16 15:17:21 crc kubenswrapper[4736]: I0316 15:17:21.198411 4736 scope.go:117] "RemoveContainer" containerID="4b64c635b6c18cd8a5330c95c24ccdd7021b7dba2da9e7b2adec4b82b5dd64b2" Mar 16 15:17:21 crc kubenswrapper[4736]: W0316 15:17:21.199568 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f8362e675ed5414917ad680c808eb2a1d930e79ef4630e662318448231298cc1 WatchSource:0}: Error finding container f8362e675ed5414917ad680c808eb2a1d930e79ef4630e662318448231298cc1: Status 404 returned error can't find the container with id f8362e675ed5414917ad680c808eb2a1d930e79ef4630e662318448231298cc1 Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.207389 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tcpnf" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.207741 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fz7zr" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.207798 4736 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5kz6z" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.208566 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lvjdd" podUID="d5522833-034d-43cc-954b-427ef568ce69" Mar 16 15:17:21 crc kubenswrapper[4736]: I0316 15:17:21.242637 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f8362e675ed5414917ad680c808eb2a1d930e79ef4630e662318448231298cc1"} Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.314979 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.315306 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkjwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cfclg_openshift-marketplace(795c05a5-413f-4361-ab0e-6796cf7862f9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.316655 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cfclg" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.327342 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.327556 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r4tzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qkm5c_openshift-marketplace(6503814f-9075-4f44-8a49-9a79e5ac3c42): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.328724 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qkm5c" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.358429 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.359876 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lwdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dps9k_openshift-marketplace(3c4444c3-8376-49ab-a094-348594b05bdb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.362471 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dps9k" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.520354 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.521177 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvv9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t25g4_openshift-marketplace(26502a77-e11a-496c-8bf5-483d61d2ed8d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:17:21 crc kubenswrapper[4736]: E0316 15:17:21.522326 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t25g4" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" Mar 16 15:17:21 crc kubenswrapper[4736]: I0316 15:17:21.795165 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 16 15:17:21 crc kubenswrapper[4736]: W0316 15:17:21.812243 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7d5f695c_45db_4f9e_8dbd_8bf8b5718855.slice/crio-8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d WatchSource:0}: Error finding container 8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d: Status 404 returned error can't find the container with id 8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d Mar 16 15:17:21 crc kubenswrapper[4736]: I0316 15:17:21.980220 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:21 crc kubenswrapper[4736]: I0316 15:17:21.992198 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.021496 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:22 crc kubenswrapper[4736]: W0316 15:17:22.031310 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6ff78da1_1b92_4e41_8857_f98bd20754ae.slice/crio-8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135 WatchSource:0}: Error finding container 
8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135: Status 404 returned error can't find the container with id 8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135 Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.263534 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" event={"ID":"ed84aba7-dd84-4680-971b-a17620346b48","Type":"ContainerStarted","Data":"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.263601 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" event={"ID":"ed84aba7-dd84-4680-971b-a17620346b48","Type":"ContainerStarted","Data":"30c76c0db6773922b5aeabd858fbd18455daa044e0b879f3d996ac7073a9a49d"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.264214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.268733 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" event={"ID":"dfeb4e31-e90d-4be9-99ef-242e4d5dc436","Type":"ContainerStarted","Data":"6f66a6c386f001cabf8947b44ba8d83316f54b4c40ddb6a34aa7159fe1611128"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.268775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" event={"ID":"dfeb4e31-e90d-4be9-99ef-242e4d5dc436","Type":"ContainerStarted","Data":"ebd534b6ee636a249791b1b5a6fc9149b08d03a690f247a337661723bfd61d6d"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.268867 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerName="route-controller-manager" containerID="cri-o://6f66a6c386f001cabf8947b44ba8d83316f54b4c40ddb6a34aa7159fe1611128" gracePeriod=30 Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.269350 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.270930 4736 patch_prober.go:28] interesting pod/controller-manager-57b7cc5445-2pw5j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.270968 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.272673 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bef330ee9546f98a41d672cd220db94a29de1dd6b50a7e72e83aebc656a51d93"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 
15:17:22.272743 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5dbd76bae6423b6abbbcc927e37b4c23bf3d24f8a75e25a52c29cba733e65ce4"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.273006 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.284197 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ff78da1-1b92-4e41-8857-f98bd20754ae","Type":"ContainerStarted","Data":"8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.288588 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" podStartSLOduration=16.288570056 podStartE2EDuration="16.288570056s" podCreationTimestamp="2026-03-16 15:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:22.287096416 +0000 UTC m=+244.014486703" watchObservedRunningTime="2026-03-16 15:17:22.288570056 +0000 UTC m=+244.015960343" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.302987 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3ea6946e9a2a1dfbec3cb853ef9b65e1a2bed32c62c53cc998d6f35e9ad3f330"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.305280 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" event={"ID":"ae9de77b-767a-4a87-b4bd-648728dc9826","Type":"ContainerStarted","Data":"a3033f5ff86fc64106eaedf12e9b3591ba74d7569ab978294dcff96605aa6b70"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.317041 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c1ff48f8d0c1394edf8a9313304bfc21d541b51d2c30ae0a63372a548548799"} Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.325704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7d5f695c-45db-4f9e-8dbd-8bf8b5718855","Type":"ContainerStarted","Data":"8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d"} Mar 16 15:17:22 crc kubenswrapper[4736]: E0316 15:17:22.328334 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cfclg" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" Mar 16 15:17:22 crc kubenswrapper[4736]: E0316 15:17:22.328891 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-dps9k" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" Mar 16 15:17:22 crc 
kubenswrapper[4736]: E0316 15:17:22.328934 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t25g4" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" Mar 16 15:17:22 crc kubenswrapper[4736]: E0316 15:17:22.329220 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qkm5c" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.336680 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" podStartSLOduration=36.336668382 podStartE2EDuration="36.336668382s" podCreationTimestamp="2026-03-16 15:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:22.333501168 +0000 UTC m=+244.060891455" watchObservedRunningTime="2026-03-16 15:17:22.336668382 +0000 UTC m=+244.064058669" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.359672 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.359660237 podStartE2EDuration="10.359660237s" podCreationTimestamp="2026-03-16 15:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:22.357327754 +0000 UTC m=+244.084718041" watchObservedRunningTime="2026-03-16 15:17:22.359660237 +0000 UTC m=+244.087050514" Mar 16 15:17:22 crc kubenswrapper[4736]: I0316 15:17:22.521524 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" podStartSLOduration=26.03740439 podStartE2EDuration="1m22.521498097s" podCreationTimestamp="2026-03-16 15:16:00 +0000 UTC" firstStartedPulling="2026-03-16 15:16:25.018067071 +0000 UTC m=+186.745457358" lastFinishedPulling="2026-03-16 15:17:21.502160778 +0000 UTC m=+243.229551065" observedRunningTime="2026-03-16 15:17:22.483079888 +0000 UTC m=+244.210470175" watchObservedRunningTime="2026-03-16 15:17:22.521498097 +0000 UTC m=+244.248888384" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.056344 4736 csr.go:261] certificate signing request csr-7xbwd is approved, waiting to be issued Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.072273 4736 csr.go:257] certificate signing request csr-7xbwd is issued Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.120598 4736 patch_prober.go:28] interesting pod/route-controller-manager-585f9bb8d5-82rs8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:46512->10.217.0.57:8443: read: connection reset by peer" start-of-body= Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.120665 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:46512->10.217.0.57:8443: read: connection reset by peer" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.337470 4736 generic.go:334] "Generic (PLEG): container finished" podID="ae9de77b-767a-4a87-b4bd-648728dc9826" containerID="a3033f5ff86fc64106eaedf12e9b3591ba74d7569ab978294dcff96605aa6b70" exitCode=0 Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.337580 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" event={"ID":"ae9de77b-767a-4a87-b4bd-648728dc9826","Type":"ContainerDied","Data":"a3033f5ff86fc64106eaedf12e9b3591ba74d7569ab978294dcff96605aa6b70"} Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.341493 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-585f9bb8d5-82rs8_dfeb4e31-e90d-4be9-99ef-242e4d5dc436/route-controller-manager/0.log" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.341543 4736 generic.go:334] "Generic (PLEG): container finished" podID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerID="6f66a6c386f001cabf8947b44ba8d83316f54b4c40ddb6a34aa7159fe1611128" exitCode=255 Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.341612 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" event={"ID":"dfeb4e31-e90d-4be9-99ef-242e4d5dc436","Type":"ContainerDied","Data":"6f66a6c386f001cabf8947b44ba8d83316f54b4c40ddb6a34aa7159fe1611128"} Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.343018 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7d5f695c-45db-4f9e-8dbd-8bf8b5718855","Type":"ContainerStarted","Data":"05c0f04ce8c46d4bb85549b4f8dcdcc3b4142b6c4f8fc2685f187f1577bc08b8"} Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.348519 4736 generic.go:334] "Generic (PLEG): container finished" podID="6ff78da1-1b92-4e41-8857-f98bd20754ae" containerID="94d661238a39a25fb6cfe449005b3c9b8385eb62474e192f062897e2da1aa87a" exitCode=0 Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.348617 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ff78da1-1b92-4e41-8857-f98bd20754ae","Type":"ContainerDied","Data":"94d661238a39a25fb6cfe449005b3c9b8385eb62474e192f062897e2da1aa87a"} Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.360167 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.691410 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-585f9bb8d5-82rs8_dfeb4e31-e90d-4be9-99ef-242e4d5dc436/route-controller-manager/0.log" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.691493 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.722475 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:23 crc kubenswrapper[4736]: E0316 15:17:23.723183 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerName="route-controller-manager" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.723397 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerName="route-controller-manager" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.723579 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" containerName="route-controller-manager" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.724056 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.738379 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.796779 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca\") pod \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.796909 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config\") pod \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.797031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkfrh\" (UniqueName: \"kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh\") pod \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.797063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert\") pod \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\" (UID: \"dfeb4e31-e90d-4be9-99ef-242e4d5dc436\") " Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.798320 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config" (OuterVolumeSpecName: "config") pod "dfeb4e31-e90d-4be9-99ef-242e4d5dc436" (UID: "dfeb4e31-e90d-4be9-99ef-242e4d5dc436"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.798310 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca" (OuterVolumeSpecName: "client-ca") pod "dfeb4e31-e90d-4be9-99ef-242e4d5dc436" (UID: "dfeb4e31-e90d-4be9-99ef-242e4d5dc436"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.804574 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dfeb4e31-e90d-4be9-99ef-242e4d5dc436" (UID: "dfeb4e31-e90d-4be9-99ef-242e4d5dc436"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.818932 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh" (OuterVolumeSpecName: "kube-api-access-pkfrh") pod "dfeb4e31-e90d-4be9-99ef-242e4d5dc436" (UID: "dfeb4e31-e90d-4be9-99ef-242e4d5dc436"). InnerVolumeSpecName "kube-api-access-pkfrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.898656 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899005 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899032 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7hg\" (UniqueName: \"kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899319 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899620 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899647 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkfrh\" (UniqueName: \"kubernetes.io/projected/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-kube-api-access-pkfrh\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899668 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-serving-cert\") on node \"crc\" DevicePath 
\"\"" Mar 16 15:17:23 crc kubenswrapper[4736]: I0316 15:17:23.899686 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfeb4e31-e90d-4be9-99ef-242e4d5dc436-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.001537 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.001642 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.001664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.001693 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7hg\" (UniqueName: \"kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.002936 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.003151 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.006615 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert\") pod \"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.016963 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7hg\" (UniqueName: \"kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg\") pod 
\"route-controller-manager-64d57c49ff-zjxtl\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.047459 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.074367 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-27 21:16:32.118217527 +0000 UTC Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.074430 4736 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6869h59m8.043790419s for next certificate rotation Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.367626 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-585f9bb8d5-82rs8_dfeb4e31-e90d-4be9-99ef-242e4d5dc436/route-controller-manager/0.log" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.367850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" event={"ID":"dfeb4e31-e90d-4be9-99ef-242e4d5dc436","Type":"ContainerDied","Data":"ebd534b6ee636a249791b1b5a6fc9149b08d03a690f247a337661723bfd61d6d"} Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.367929 4736 scope.go:117] "RemoveContainer" containerID="6f66a6c386f001cabf8947b44ba8d83316f54b4c40ddb6a34aa7159fe1611128" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.368323 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.421618 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.426158 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-585f9bb8d5-82rs8"] Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.512663 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.690647 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.732325 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.821945 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir\") pod \"6ff78da1-1b92-4e41-8857-f98bd20754ae\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.822012 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjqz\" (UniqueName: \"kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz\") pod \"ae9de77b-767a-4a87-b4bd-648728dc9826\" (UID: \"ae9de77b-767a-4a87-b4bd-648728dc9826\") " Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.822048 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access\") pod \"6ff78da1-1b92-4e41-8857-f98bd20754ae\" (UID: \"6ff78da1-1b92-4e41-8857-f98bd20754ae\") " Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.822048 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6ff78da1-1b92-4e41-8857-f98bd20754ae" (UID: "6ff78da1-1b92-4e41-8857-f98bd20754ae"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.822301 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ff78da1-1b92-4e41-8857-f98bd20754ae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.828686 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6ff78da1-1b92-4e41-8857-f98bd20754ae" (UID: "6ff78da1-1b92-4e41-8857-f98bd20754ae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.829158 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz" (OuterVolumeSpecName: "kube-api-access-8cjqz") pod "ae9de77b-767a-4a87-b4bd-648728dc9826" (UID: "ae9de77b-767a-4a87-b4bd-648728dc9826"). InnerVolumeSpecName "kube-api-access-8cjqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.923636 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjqz\" (UniqueName: \"kubernetes.io/projected/ae9de77b-767a-4a87-b4bd-648728dc9826-kube-api-access-8cjqz\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.923673 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6ff78da1-1b92-4e41-8857-f98bd20754ae-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:24 crc kubenswrapper[4736]: I0316 15:17:24.984532 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfeb4e31-e90d-4be9-99ef-242e4d5dc436" path="/var/lib/kubelet/pods/dfeb4e31-e90d-4be9-99ef-242e4d5dc436/volumes" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.376040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6ff78da1-1b92-4e41-8857-f98bd20754ae","Type":"ContainerDied","Data":"8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135"} Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.376088 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9b92127970d38c64d6108d5ec8af788372934ecaa1796f0ce6ac2c82859135" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.376178 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.377847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" event={"ID":"0b62ba64-3219-4118-b256-ba719c470325","Type":"ContainerStarted","Data":"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5"} Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.377873 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" event={"ID":"0b62ba64-3219-4118-b256-ba719c470325","Type":"ContainerStarted","Data":"3343cc53743be77f745103901df258eb39e445d193ffbf64136bf3c28d54c669"} Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.379055 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.382031 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.382348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561236-cxwv6" event={"ID":"ae9de77b-767a-4a87-b4bd-648728dc9826","Type":"ContainerDied","Data":"d988d5aa945c64ec483107c52ba2ae2505e9c617cd165f7ed2b39b1ff740d2a4"} Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.382363 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d988d5aa945c64ec483107c52ba2ae2505e9c617cd165f7ed2b39b1ff740d2a4" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.386192 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:25 crc kubenswrapper[4736]: I0316 15:17:25.398094 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" podStartSLOduration=19.398079487 podStartE2EDuration="19.398079487s" podCreationTimestamp="2026-03-16 15:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:25.397149972 +0000 UTC m=+247.124540249" watchObservedRunningTime="2026-03-16 15:17:25.398079487 +0000 UTC m=+247.125469774" Mar 16 15:17:26 crc kubenswrapper[4736]: I0316 15:17:26.628720 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:26 crc kubenswrapper[4736]: I0316 15:17:26.628929 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" containerID="cri-o://050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3" gracePeriod=30 Mar 16 15:17:26 crc kubenswrapper[4736]: I0316 15:17:26.684701 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.063527 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.161808 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles\") pod \"ed84aba7-dd84-4680-971b-a17620346b48\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.161875 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config\") pod \"ed84aba7-dd84-4680-971b-a17620346b48\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.161915 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca\") pod \"ed84aba7-dd84-4680-971b-a17620346b48\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.161950 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hhnk\" (UniqueName: \"kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk\") pod \"ed84aba7-dd84-4680-971b-a17620346b48\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.162004 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert\") pod \"ed84aba7-dd84-4680-971b-a17620346b48\" (UID: \"ed84aba7-dd84-4680-971b-a17620346b48\") " Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.162940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed84aba7-dd84-4680-971b-a17620346b48" (UID: "ed84aba7-dd84-4680-971b-a17620346b48"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.162933 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed84aba7-dd84-4680-971b-a17620346b48" (UID: "ed84aba7-dd84-4680-971b-a17620346b48"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.162963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config" (OuterVolumeSpecName: "config") pod "ed84aba7-dd84-4680-971b-a17620346b48" (UID: "ed84aba7-dd84-4680-971b-a17620346b48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.168043 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk" (OuterVolumeSpecName: "kube-api-access-5hhnk") pod "ed84aba7-dd84-4680-971b-a17620346b48" (UID: "ed84aba7-dd84-4680-971b-a17620346b48"). InnerVolumeSpecName "kube-api-access-5hhnk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.169149 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed84aba7-dd84-4680-971b-a17620346b48" (UID: "ed84aba7-dd84-4680-971b-a17620346b48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.264258 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.264333 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.264355 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed84aba7-dd84-4680-971b-a17620346b48-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.264373 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hhnk\" (UniqueName: \"kubernetes.io/projected/ed84aba7-dd84-4680-971b-a17620346b48-kube-api-access-5hhnk\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.264432 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed84aba7-dd84-4680-971b-a17620346b48-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.403350 4736 generic.go:334] "Generic (PLEG): container finished" podID="ed84aba7-dd84-4680-971b-a17620346b48" containerID="050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3" exitCode=0 Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.403566 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.403731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" event={"ID":"ed84aba7-dd84-4680-971b-a17620346b48","Type":"ContainerDied","Data":"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3"} Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.403831 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" event={"ID":"ed84aba7-dd84-4680-971b-a17620346b48","Type":"ContainerDied","Data":"30c76c0db6773922b5aeabd858fbd18455daa044e0b879f3d996ac7073a9a49d"} Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.403905 4736 scope.go:117] "RemoveContainer" containerID="050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.422407 4736 scope.go:117] "RemoveContainer" containerID="050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3" Mar 16 15:17:27 crc kubenswrapper[4736]: E0316 15:17:27.422936 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3\": container with ID starting with 050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3 not found: ID does not exist" containerID="050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.422979 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3"} err="failed to get container status \"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3\": rpc error: code = NotFound desc = could not find container \"050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3\": container with ID starting with 050d0918de578c4ceb7f99ba48679c87b152042bd21188b6c527736c9ce5f5a3 not found: ID does not exist" Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.446531 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.449222 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57b7cc5445-2pw5j"] Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.968443 4736 patch_prober.go:28] interesting pod/controller-manager-57b7cc5445-2pw5j container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" start-of-body= Mar 16 15:17:27 crc kubenswrapper[4736]: I0316 15:17:27.968528 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57b7cc5445-2pw5j" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138163 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:28 crc kubenswrapper[4736]: E0316 15:17:28.138515 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ff78da1-1b92-4e41-8857-f98bd20754ae" containerName="pruner" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138534 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ff78da1-1b92-4e41-8857-f98bd20754ae" containerName="pruner" Mar 16 15:17:28 crc kubenswrapper[4736]: E0316 15:17:28.138573 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" containerName="oc" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138586 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" containerName="oc" Mar 16 15:17:28 crc kubenswrapper[4736]: E0316 15:17:28.138606 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138616 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138745 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed84aba7-dd84-4680-971b-a17620346b48" containerName="controller-manager" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138759 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ff78da1-1b92-4e41-8857-f98bd20754ae" containerName="pruner" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.138779 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" containerName="oc" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.139451 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.144148 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.149270 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.150644 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.158921 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.158989 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.159201 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.161715 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.173899 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.282050 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.282199 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.282447 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f2r7\" (UniqueName: \"kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.282542 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.282650 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.383712 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.383776 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.383796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.383897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f2r7\" (UniqueName: \"kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.383927 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.385306 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.386145 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.387072 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc 
kubenswrapper[4736]: I0316 15:17:28.389758 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.403723 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f2r7\" (UniqueName: \"kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7\") pod \"controller-manager-f96848cc7-x2nnh\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.411059 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" podUID="0b62ba64-3219-4118-b256-ba719c470325" containerName="route-controller-manager" containerID="cri-o://37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5" gracePeriod=30 Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.472209 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.850268 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.936341 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:28 crc kubenswrapper[4736]: W0316 15:17:28.943555 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1246239d_d9d8_4a08_8068_2d03260a7365.slice/crio-a1f0ef8d0a09c5f80ebe25d7f5f33bf5eb7fd8d25fcc3d5f02c863e36051113b WatchSource:0}: Error finding container a1f0ef8d0a09c5f80ebe25d7f5f33bf5eb7fd8d25fcc3d5f02c863e36051113b: Status 404 returned error can't find the container with id a1f0ef8d0a09c5f80ebe25d7f5f33bf5eb7fd8d25fcc3d5f02c863e36051113b Mar 16 15:17:28 crc kubenswrapper[4736]: I0316 15:17:28.989540 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed84aba7-dd84-4680-971b-a17620346b48" path="/var/lib/kubelet/pods/ed84aba7-dd84-4680-971b-a17620346b48/volumes" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.006226 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config\") pod \"0b62ba64-3219-4118-b256-ba719c470325\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.006309 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7hg\" (UniqueName: \"kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg\") pod \"0b62ba64-3219-4118-b256-ba719c470325\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.006366 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca\") pod \"0b62ba64-3219-4118-b256-ba719c470325\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.006416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert\") pod \"0b62ba64-3219-4118-b256-ba719c470325\" (UID: \"0b62ba64-3219-4118-b256-ba719c470325\") " Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.007637 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config" (OuterVolumeSpecName: "config") pod "0b62ba64-3219-4118-b256-ba719c470325" (UID: "0b62ba64-3219-4118-b256-ba719c470325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.007714 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b62ba64-3219-4118-b256-ba719c470325" (UID: "0b62ba64-3219-4118-b256-ba719c470325"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.012014 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg" (OuterVolumeSpecName: "kube-api-access-jb7hg") pod "0b62ba64-3219-4118-b256-ba719c470325" (UID: "0b62ba64-3219-4118-b256-ba719c470325"). InnerVolumeSpecName "kube-api-access-jb7hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.012879 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b62ba64-3219-4118-b256-ba719c470325" (UID: "0b62ba64-3219-4118-b256-ba719c470325"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.111345 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.111407 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7hg\" (UniqueName: \"kubernetes.io/projected/0b62ba64-3219-4118-b256-ba719c470325-kube-api-access-jb7hg\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.111428 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b62ba64-3219-4118-b256-ba719c470325-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.111441 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b62ba64-3219-4118-b256-ba719c470325-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.417431 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" event={"ID":"1246239d-d9d8-4a08-8068-2d03260a7365","Type":"ContainerStarted","Data":"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf"} Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.417831 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.417926 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" event={"ID":"1246239d-d9d8-4a08-8068-2d03260a7365","Type":"ContainerStarted","Data":"a1f0ef8d0a09c5f80ebe25d7f5f33bf5eb7fd8d25fcc3d5f02c863e36051113b"} Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.419536 4736 generic.go:334] "Generic (PLEG): container finished" podID="0b62ba64-3219-4118-b256-ba719c470325" containerID="37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5" exitCode=0 Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.419588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" event={"ID":"0b62ba64-3219-4118-b256-ba719c470325","Type":"ContainerDied","Data":"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5"} Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.419615 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.419637 4736 scope.go:117] "RemoveContainer" containerID="37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.419621 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl" event={"ID":"0b62ba64-3219-4118-b256-ba719c470325","Type":"ContainerDied","Data":"3343cc53743be77f745103901df258eb39e445d193ffbf64136bf3c28d54c669"} Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.424233 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.436416 4736 scope.go:117] "RemoveContainer" containerID="37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5" Mar 16 15:17:29 crc kubenswrapper[4736]: E0316 15:17:29.436910 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5\": container with ID starting with 37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5 not found: ID does not exist" containerID="37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.436948 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5"} err="failed to get container status \"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5\": rpc error: code = NotFound desc = could not find container \"37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5\": container with ID starting with 37bd35c6da58d2e253340bc8b763ac0105b54d393d316a99c70a332f6e6833e5 not found: ID does not exist" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.441047 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" podStartSLOduration=3.440999879 podStartE2EDuration="3.440999879s" podCreationTimestamp="2026-03-16 15:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:29.439325103 +0000 UTC m=+251.166715390" watchObservedRunningTime="2026-03-16 15:17:29.440999879 +0000 UTC m=+251.168390166" Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.455526 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:29 crc kubenswrapper[4736]: I0316 15:17:29.458315 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64d57c49ff-zjxtl"] Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.138803 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:30 crc kubenswrapper[4736]: E0316 15:17:30.139312 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b62ba64-3219-4118-b256-ba719c470325" containerName="route-controller-manager" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.139340 
4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b62ba64-3219-4118-b256-ba719c470325" containerName="route-controller-manager" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.139520 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b62ba64-3219-4118-b256-ba719c470325" containerName="route-controller-manager" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.141197 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.143899 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.143907 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.143989 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.145275 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.145383 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.149063 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.162486 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.227294 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.227983 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.228031 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt87k\" (UniqueName: \"kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.228069 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config\") pod 
\"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.329734 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.329827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.329850 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt87k\" (UniqueName: \"kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.329875 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.330702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.330885 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.338628 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.351952 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt87k\" (UniqueName: \"kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k\") pod \"route-controller-manager-6896cbdb8c-xl78s\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " 
pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.464708 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.688392 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:30 crc kubenswrapper[4736]: I0316 15:17:30.990965 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b62ba64-3219-4118-b256-ba719c470325" path="/var/lib/kubelet/pods/0b62ba64-3219-4118-b256-ba719c470325/volumes" Mar 16 15:17:31 crc kubenswrapper[4736]: I0316 15:17:31.435316 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" event={"ID":"59fd1022-975a-43a4-b97c-ab2de9b0aacb","Type":"ContainerStarted","Data":"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d"} Mar 16 15:17:31 crc kubenswrapper[4736]: I0316 15:17:31.435375 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" event={"ID":"59fd1022-975a-43a4-b97c-ab2de9b0aacb","Type":"ContainerStarted","Data":"2474f19798e5abc1f1d932bba728bc50ee61d7233f88b7c040bc10f16bed0e93"} Mar 16 15:17:31 crc kubenswrapper[4736]: I0316 15:17:31.454940 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" podStartSLOduration=5.454921093 podStartE2EDuration="5.454921093s" podCreationTimestamp="2026-03-16 15:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:31.450791152 +0000 UTC m=+253.178181439" watchObservedRunningTime="2026-03-16 15:17:31.454921093 +0000 UTC m=+253.182311380" Mar 16 15:17:32 crc kubenswrapper[4736]: I0316 15:17:32.453404 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:32 crc kubenswrapper[4736]: I0316 15:17:32.465008 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.278635 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" containerID="cri-o://9976cff7d658c06e9bdff4eec6c60a79a3a9eed47e219f988a97b7c8888f7d6e" gracePeriod=15 Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.468481 4736 generic.go:334] "Generic (PLEG): container finished" podID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerID="60a30547e8f8d34c6087f57e7fe28ca1ef139d1225972d999355dd6c0694e6af" exitCode=0 Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.468521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerDied","Data":"60a30547e8f8d34c6087f57e7fe28ca1ef139d1225972d999355dd6c0694e6af"} Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.476866 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerStarted","Data":"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017"} Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.484432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerStarted","Data":"d93b714827ae3f55857b40bebe05954821c8e7712858821b07cc082ef054ef59"} Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.490593 4736 generic.go:334] "Generic (PLEG): container finished" podID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerID="9976cff7d658c06e9bdff4eec6c60a79a3a9eed47e219f988a97b7c8888f7d6e" exitCode=0 Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.490673 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" event={"ID":"6c3b56d3-03e4-4bba-8ac0-072c4a281513","Type":"ContainerDied","Data":"9976cff7d658c06e9bdff4eec6c60a79a3a9eed47e219f988a97b7c8888f7d6e"} Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.494088 4736 generic.go:334] "Generic (PLEG): container finished" podID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerID="6098e1484cb7d025ce673ed3d5ade854a158dca952c8af481c284f694b316996" exitCode=0 Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.494174 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerDied","Data":"6098e1484cb7d025ce673ed3d5ade854a158dca952c8af481c284f694b316996"} Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.779380 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.905832 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907045 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907133 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907175 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907273 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907351 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907375 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc 
kubenswrapper[4736]: I0316 15:17:34.907676 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.907879 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908044 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908284 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908342 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908380 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908428 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-477lq\" (UniqueName: \"kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq\") pod \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\" (UID: \"6c3b56d3-03e4-4bba-8ac0-072c4a281513\") " Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908183 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.908422 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.909401 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.913425 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.913448 4736 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.927875 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq" (OuterVolumeSpecName: "kube-api-access-477lq") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "kube-api-access-477lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.928858 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.929151 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.929415 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.929178 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.929942 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.929663 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.930248 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:34 crc kubenswrapper[4736]: I0316 15:17:34.930500 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6c3b56d3-03e4-4bba-8ac0-072c4a281513" (UID: "6c3b56d3-03e4-4bba-8ac0-072c4a281513"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015299 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015354 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015370 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015391 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015412 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015430 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-477lq\" (UniqueName: \"kubernetes.io/projected/6c3b56d3-03e4-4bba-8ac0-072c4a281513-kube-api-access-477lq\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015446 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015459 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015475 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015499 4736 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015516 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.015534 4736 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c3b56d3-03e4-4bba-8ac0-072c4a281513-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.509932 4736 generic.go:334] "Generic (PLEG): container finished" podID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerID="8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017" exitCode=0 Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.510052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerDied","Data":"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017"} Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.513140 4736 generic.go:334] "Generic (PLEG): container finished" podID="d5522833-034d-43cc-954b-427ef568ce69" containerID="d93b714827ae3f55857b40bebe05954821c8e7712858821b07cc082ef054ef59" exitCode=0 Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.513274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerDied","Data":"d93b714827ae3f55857b40bebe05954821c8e7712858821b07cc082ef054ef59"} Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.518532 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" event={"ID":"6c3b56d3-03e4-4bba-8ac0-072c4a281513","Type":"ContainerDied","Data":"6fe4dc579d879caf4eda3622821f4b59b461b7401bb994f72747d6f5aefb827a"} Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.518591 4736 scope.go:117] "RemoveContainer" containerID="9976cff7d658c06e9bdff4eec6c60a79a3a9eed47e219f988a97b7c8888f7d6e" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.518655 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-76btc" Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.576058 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:17:35 crc kubenswrapper[4736]: I0316 15:17:35.583722 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-76btc"] Mar 16 15:17:36 crc kubenswrapper[4736]: I0316 15:17:36.531328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerStarted","Data":"b42da5c540efd0302155b2f45e92c10815e0ff8442d4780a124c73a18908141e"} Mar 16 15:17:36 crc kubenswrapper[4736]: I0316 15:17:36.553057 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cfclg" podStartSLOduration=4.287810386 podStartE2EDuration="1m4.553038082s" podCreationTimestamp="2026-03-16 15:16:32 +0000 UTC" firstStartedPulling="2026-03-16 15:16:36.033845482 +0000 UTC m=+197.761235769" lastFinishedPulling="2026-03-16 15:17:36.299073188 +0000 UTC m=+258.026463465" observedRunningTime="2026-03-16 15:17:36.551167211 +0000 UTC m=+258.278557498" watchObservedRunningTime="2026-03-16 15:17:36.553038082 +0000 UTC m=+258.280428369" Mar 16 15:17:36 crc kubenswrapper[4736]: I0316 15:17:36.986725 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" path="/var/lib/kubelet/pods/6c3b56d3-03e4-4bba-8ac0-072c4a281513/volumes" Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.539255 4736 generic.go:334] "Generic (PLEG): container finished" podID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerID="9849ac1005bd0e02a40354df3fc7734e6f42f4a4d1dd969ac2d94f979b325425" exitCode=0 Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.539359 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerDied","Data":"9849ac1005bd0e02a40354df3fc7734e6f42f4a4d1dd969ac2d94f979b325425"} Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.541721 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerStarted","Data":"ce635b3bb11314063cc8faab5ed8a09c515658044fa14d103e82d9fc3e6e207e"} Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.545022 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerStarted","Data":"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39"} Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.547422 4736 generic.go:334] "Generic (PLEG): container finished" podID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerID="67144720a29b5aec9b60f2b279512d9f77b84cba034a0763e9743faa49a08ba9" exitCode=0 Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.547484 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerDied","Data":"67144720a29b5aec9b60f2b279512d9f77b84cba034a0763e9743faa49a08ba9"} Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.551574 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerStarted","Data":"466422a466d0c4f26d0e10f87cf93441534a75cdefc8d3cf35cf255f61ecc6e5"} Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.646722 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tcpnf" podStartSLOduration=4.660219901 podStartE2EDuration="1m2.646700438s" podCreationTimestamp="2026-03-16 15:16:35 +0000 UTC" firstStartedPulling="2026-03-16 15:16:38.384442073 +0000 UTC m=+200.111832360" lastFinishedPulling="2026-03-16 15:17:36.37092261 +0000 UTC m=+258.098312897" observedRunningTime="2026-03-16 15:17:37.644585961 +0000 UTC m=+259.371976248" watchObservedRunningTime="2026-03-16 15:17:37.646700438 +0000 UTC m=+259.374090735" Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.648381 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lvjdd" podStartSLOduration=4.753256579 podStartE2EDuration="1m2.648372163s" podCreationTimestamp="2026-03-16 15:16:35 +0000 UTC" firstStartedPulling="2026-03-16 15:16:38.433728911 +0000 UTC m=+200.161119198" lastFinishedPulling="2026-03-16 15:17:36.328844495 +0000 UTC m=+258.056234782" observedRunningTime="2026-03-16 15:17:37.622737017 +0000 UTC m=+259.350127304" watchObservedRunningTime="2026-03-16 15:17:37.648372163 +0000 UTC m=+259.375762460" Mar 16 15:17:37 crc kubenswrapper[4736]: I0316 15:17:37.666295 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kz6z" podStartSLOduration=5.422706116 podStartE2EDuration="1m5.666273782s" podCreationTimestamp="2026-03-16 15:16:32 +0000 UTC" firstStartedPulling="2026-03-16 15:16:36.073853363 +0000 UTC m=+197.801243650" lastFinishedPulling="2026-03-16 15:17:36.317421029 +0000 UTC m=+258.044811316" observedRunningTime="2026-03-16 15:17:37.663907488 +0000 UTC m=+259.391297775" watchObservedRunningTime="2026-03-16 15:17:37.666273782 +0000 UTC m=+259.393664069" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.139420 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d97c857-tmkjv"] Mar 16 15:17:38 crc kubenswrapper[4736]: E0316 15:17:38.139913 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.140027 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.140214 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3b56d3-03e4-4bba-8ac0-072c4a281513" containerName="oauth-openshift" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.140682 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.151352 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.151812 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.152070 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.157439 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.171628 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.171744 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.171919 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.172275 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.171849 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.176861 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.181476 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.181812 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.190887 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.196751 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.197186 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d97c857-tmkjv"] Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.199427 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.266756 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-session\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " 
pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.266823 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.266864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.266887 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267081 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267231 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267273 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frr4\" (UniqueName: \"kubernetes.io/projected/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-kube-api-access-8frr4\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-policies\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267436 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-dir\") pod 
\"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267542 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267585 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.267756 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-dir\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369372 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369414 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369439 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-session\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369456 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-dir\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369478 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369661 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369763 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369799 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369821 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8frr4\" (UniqueName: \"kubernetes.io/projected/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-kube-api-access-8frr4\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.369844 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-policies\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.370312 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.371737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-audit-policies\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.374626 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.377392 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-session\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: 
\"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.377397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.379820 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.382463 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.385609 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.386507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.386858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.391817 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-login\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.396522 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-v4-0-config-user-template-error\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " 
pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.401962 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8frr4\" (UniqueName: \"kubernetes.io/projected/3aded1fa-a7f1-4247-9f8b-3638bbbc47c7-kube-api-access-8frr4\") pod \"oauth-openshift-5d97c857-tmkjv\" (UID: \"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7\") " pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.488469 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.508496 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.508902 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:17:38 crc kubenswrapper[4736]: I0316 15:17:38.987190 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d97c857-tmkjv"] Mar 16 15:17:38 crc kubenswrapper[4736]: W0316 15:17:38.989665 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aded1fa_a7f1_4247_9f8b_3638bbbc47c7.slice/crio-f1a1a65e18045b8fcc6470dcca30bcb1ab4508c538117a16043976145f729236 WatchSource:0}: Error finding container f1a1a65e18045b8fcc6470dcca30bcb1ab4508c538117a16043976145f729236: Status 404 returned error can't find the container with id f1a1a65e18045b8fcc6470dcca30bcb1ab4508c538117a16043976145f729236 Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.577238 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" event={"ID":"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7","Type":"ContainerStarted","Data":"51c9ddd6e4e6640a999987a86b011a07eb2d8c508ac49f57ef32d4571ae63721"} Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.577304 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" event={"ID":"3aded1fa-a7f1-4247-9f8b-3638bbbc47c7","Type":"ContainerStarted","Data":"f1a1a65e18045b8fcc6470dcca30bcb1ab4508c538117a16043976145f729236"} Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.577741 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.581359 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c4444c3-8376-49ab-a094-348594b05bdb" containerID="bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665" exitCode=0 Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.581430 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" 
event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerDied","Data":"bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665"} Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.584985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerStarted","Data":"da9dac940376bf434308fae1e016e001ade44584111c2149b9b8a3c24c246bd5"} Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.603849 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" podStartSLOduration=30.603824383 podStartE2EDuration="30.603824383s" podCreationTimestamp="2026-03-16 15:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:39.602537518 +0000 UTC m=+261.329927805" watchObservedRunningTime="2026-03-16 15:17:39.603824383 +0000 UTC m=+261.331214670" Mar 16 15:17:39 crc kubenswrapper[4736]: I0316 15:17:39.661655 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fz7zr" podStartSLOduration=4.684801638 podStartE2EDuration="1m5.661631719s" podCreationTimestamp="2026-03-16 15:16:34 +0000 UTC" firstStartedPulling="2026-03-16 15:16:37.451614049 +0000 UTC m=+199.179004336" lastFinishedPulling="2026-03-16 15:17:38.42844413 +0000 UTC m=+260.155834417" observedRunningTime="2026-03-16 15:17:39.658125515 +0000 UTC m=+261.385515802" watchObservedRunningTime="2026-03-16 15:17:39.661631719 +0000 UTC m=+261.389022006" Mar 16 15:17:40 crc kubenswrapper[4736]: I0316 15:17:40.331237 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" Mar 16 15:17:40 crc kubenswrapper[4736]: I0316 15:17:40.598684 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerStarted","Data":"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7"} Mar 16 15:17:42 crc kubenswrapper[4736]: I0316 15:17:42.866983 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:17:42 crc kubenswrapper[4736]: I0316 15:17:42.868557 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.133886 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.133948 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.392617 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.394988 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.412856 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dps9k" 
podStartSLOduration=6.393736191 podStartE2EDuration="1m9.412837921s" podCreationTimestamp="2026-03-16 15:16:34 +0000 UTC" firstStartedPulling="2026-03-16 15:16:37.277946283 +0000 UTC m=+199.005336570" lastFinishedPulling="2026-03-16 15:17:40.297048013 +0000 UTC m=+262.024438300" observedRunningTime="2026-03-16 15:17:40.623278484 +0000 UTC m=+262.350668771" watchObservedRunningTime="2026-03-16 15:17:43.412837921 +0000 UTC m=+265.140228208" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.665733 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:17:43 crc kubenswrapper[4736]: I0316 15:17:43.689075 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.626775 4736 generic.go:334] "Generic (PLEG): container finished" podID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerID="090dfa0f0dca5d28837aaefc14dd133148294f93f67e780537821ab3d8b3b756" exitCode=0 Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.626857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerDied","Data":"090dfa0f0dca5d28837aaefc14dd133148294f93f67e780537821ab3d8b3b756"} Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.634048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerStarted","Data":"cb56f1e801af91d3d000d614a32589cea3358b0017e0ced619af07aaec861a21"} Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.673656 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.674116 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.675384 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qkm5c" podStartSLOduration=4.980120403 podStartE2EDuration="1m13.675365649s" podCreationTimestamp="2026-03-16 15:16:31 +0000 UTC" firstStartedPulling="2026-03-16 15:16:34.905889269 +0000 UTC m=+196.633279566" lastFinishedPulling="2026-03-16 15:17:43.601134525 +0000 UTC m=+265.328524812" observedRunningTime="2026-03-16 15:17:44.672725122 +0000 UTC m=+266.400115409" watchObservedRunningTime="2026-03-16 15:17:44.675365649 +0000 UTC m=+266.402755936" Mar 16 15:17:44 crc kubenswrapper[4736]: I0316 15:17:44.728789 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.274272 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.274684 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.320775 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 
15:17:45.496631 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.496694 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.544180 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.643246 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerStarted","Data":"247bbe9b3ba888028cd217128f24c68e127f92b04ffe072c26276b8f084c8e95"} Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.652545 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.653544 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.669235 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t25g4" podStartSLOduration=3.467549039 podStartE2EDuration="1m13.669204762s" podCreationTimestamp="2026-03-16 15:16:32 +0000 UTC" firstStartedPulling="2026-03-16 15:16:34.858271225 +0000 UTC m=+196.585661512" lastFinishedPulling="2026-03-16 15:17:45.059926958 +0000 UTC m=+266.787317235" observedRunningTime="2026-03-16 15:17:45.662718633 +0000 UTC m=+267.390108920" watchObservedRunningTime="2026-03-16 15:17:45.669204762 +0000 UTC m=+267.396595049" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.688953 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.691960 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.698866 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:17:45 crc kubenswrapper[4736]: I0316 15:17:45.725405 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:46 crc kubenswrapper[4736]: I0316 15:17:46.632759 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:46 crc kubenswrapper[4736]: I0316 15:17:46.632993 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" podUID="1246239d-d9d8-4a08-8068-2d03260a7365" containerName="controller-manager" containerID="cri-o://b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf" gracePeriod=30 Mar 16 15:17:46 crc kubenswrapper[4736]: I0316 15:17:46.703077 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:46 crc kubenswrapper[4736]: I0316 15:17:46.732998 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:46 crc kubenswrapper[4736]: I0316 15:17:46.733507 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" podUID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" containerName="route-controller-manager" containerID="cri-o://04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d" gracePeriod=30 Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.225598 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.254147 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.311449 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config\") pod \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.311529 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt87k\" (UniqueName: \"kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k\") pod \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.311574 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert\") pod \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.311661 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca\") pod \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\" (UID: \"59fd1022-975a-43a4-b97c-ab2de9b0aacb\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.313850 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config" (OuterVolumeSpecName: "config") pod "59fd1022-975a-43a4-b97c-ab2de9b0aacb" (UID: "59fd1022-975a-43a4-b97c-ab2de9b0aacb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.314434 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca" (OuterVolumeSpecName: "client-ca") pod "59fd1022-975a-43a4-b97c-ab2de9b0aacb" (UID: "59fd1022-975a-43a4-b97c-ab2de9b0aacb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.319486 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59fd1022-975a-43a4-b97c-ab2de9b0aacb" (UID: "59fd1022-975a-43a4-b97c-ab2de9b0aacb"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.319849 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k" (OuterVolumeSpecName: "kube-api-access-rt87k") pod "59fd1022-975a-43a4-b97c-ab2de9b0aacb" (UID: "59fd1022-975a-43a4-b97c-ab2de9b0aacb"). InnerVolumeSpecName "kube-api-access-rt87k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.355777 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.356124 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kz6z" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="registry-server" containerID="cri-o://ce635b3bb11314063cc8faab5ed8a09c515658044fa14d103e82d9fc3e6e207e" gracePeriod=2 Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.414507 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert\") pod \"1246239d-d9d8-4a08-8068-2d03260a7365\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.414581 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles\") pod \"1246239d-d9d8-4a08-8068-2d03260a7365\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.414615 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config\") pod \"1246239d-d9d8-4a08-8068-2d03260a7365\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.415585 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1246239d-d9d8-4a08-8068-2d03260a7365" (UID: "1246239d-d9d8-4a08-8068-2d03260a7365"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.415686 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca" (OuterVolumeSpecName: "client-ca") pod "1246239d-d9d8-4a08-8068-2d03260a7365" (UID: "1246239d-d9d8-4a08-8068-2d03260a7365"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.415788 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config" (OuterVolumeSpecName: "config") pod "1246239d-d9d8-4a08-8068-2d03260a7365" (UID: "1246239d-d9d8-4a08-8068-2d03260a7365"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.415863 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca\") pod \"1246239d-d9d8-4a08-8068-2d03260a7365\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.415985 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f2r7\" (UniqueName: \"kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7\") pod \"1246239d-d9d8-4a08-8068-2d03260a7365\" (UID: \"1246239d-d9d8-4a08-8068-2d03260a7365\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417026 4736 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417057 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt87k\" (UniqueName: \"kubernetes.io/projected/59fd1022-975a-43a4-b97c-ab2de9b0aacb-kube-api-access-rt87k\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417074 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417139 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59fd1022-975a-43a4-b97c-ab2de9b0aacb-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417152 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1246239d-d9d8-4a08-8068-2d03260a7365-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417165 4736 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-client-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.417180 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59fd1022-975a-43a4-b97c-ab2de9b0aacb-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.418191 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1246239d-d9d8-4a08-8068-2d03260a7365" (UID: "1246239d-d9d8-4a08-8068-2d03260a7365"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.419182 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7" (OuterVolumeSpecName: "kube-api-access-9f2r7") pod "1246239d-d9d8-4a08-8068-2d03260a7365" (UID: "1246239d-d9d8-4a08-8068-2d03260a7365"). InnerVolumeSpecName "kube-api-access-9f2r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.520349 4736 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1246239d-d9d8-4a08-8068-2d03260a7365-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.520406 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f2r7\" (UniqueName: \"kubernetes.io/projected/1246239d-d9d8-4a08-8068-2d03260a7365-kube-api-access-9f2r7\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.662486 4736 generic.go:334] "Generic (PLEG): container finished" podID="1246239d-d9d8-4a08-8068-2d03260a7365" containerID="b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf" exitCode=0 Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.662571 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" event={"ID":"1246239d-d9d8-4a08-8068-2d03260a7365","Type":"ContainerDied","Data":"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf"} Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.662615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" event={"ID":"1246239d-d9d8-4a08-8068-2d03260a7365","Type":"ContainerDied","Data":"a1f0ef8d0a09c5f80ebe25d7f5f33bf5eb7fd8d25fcc3d5f02c863e36051113b"} Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.662642 4736 scope.go:117] "RemoveContainer" containerID="b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.662813 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f96848cc7-x2nnh" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.669029 4736 generic.go:334] "Generic (PLEG): container finished" podID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" containerID="04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d" exitCode=0 Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.669094 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" event={"ID":"59fd1022-975a-43a4-b97c-ab2de9b0aacb","Type":"ContainerDied","Data":"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d"} Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.669182 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" event={"ID":"59fd1022-975a-43a4-b97c-ab2de9b0aacb","Type":"ContainerDied","Data":"2474f19798e5abc1f1d932bba728bc50ee61d7233f88b7c040bc10f16bed0e93"} Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.669250 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.674721 4736 generic.go:334] "Generic (PLEG): container finished" podID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerID="ce635b3bb11314063cc8faab5ed8a09c515658044fa14d103e82d9fc3e6e207e" exitCode=0 Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.675399 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerDied","Data":"ce635b3bb11314063cc8faab5ed8a09c515658044fa14d103e82d9fc3e6e207e"} Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.693918 4736 scope.go:117] "RemoveContainer" containerID="b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf" Mar 16 15:17:47 crc kubenswrapper[4736]: E0316 15:17:47.694510 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf\": container with ID starting with b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf not found: ID does not exist" containerID="b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.694548 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf"} err="failed to get container status \"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf\": rpc error: code = NotFound desc = could not find container \"b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf\": container with ID starting with b692754f72a20c6139efd1508b0ff651c5338c982ca73dcf4656b6a7377dc0cf not found: ID does not exist" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.694568 4736 scope.go:117] "RemoveContainer" containerID="04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.711404 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.722994 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6896cbdb8c-xl78s"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.730501 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.731647 4736 scope.go:117] "RemoveContainer" containerID="04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d" Mar 16 15:17:47 crc kubenswrapper[4736]: E0316 15:17:47.732469 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d\": container with ID starting with 04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d not found: ID does not exist" containerID="04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.732550 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d"} err="failed to get container status \"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d\": rpc error: code = NotFound desc = could not find container \"04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d\": container with ID starting with 04528a805f5f649895f6eb6138f7afe80ad549cfe3e58221cfa0c9091e9dd12d not found: ID does not exist" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.734062 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f96848cc7-x2nnh"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.816617 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.929968 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content\") pod \"97816ab1-5779-4821-9bba-63b4e6d4586b\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.930031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities\") pod \"97816ab1-5779-4821-9bba-63b4e6d4586b\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.930092 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thpth\" (UniqueName: \"kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth\") pod \"97816ab1-5779-4821-9bba-63b4e6d4586b\" (UID: \"97816ab1-5779-4821-9bba-63b4e6d4586b\") " Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.931649 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities" (OuterVolumeSpecName: "utilities") pod "97816ab1-5779-4821-9bba-63b4e6d4586b" (UID: "97816ab1-5779-4821-9bba-63b4e6d4586b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.934071 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth" (OuterVolumeSpecName: "kube-api-access-thpth") pod "97816ab1-5779-4821-9bba-63b4e6d4586b" (UID: "97816ab1-5779-4821-9bba-63b4e6d4586b"). InnerVolumeSpecName "kube-api-access-thpth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.947717 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:17:47 crc kubenswrapper[4736]: I0316 15:17:47.948284 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dps9k" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="registry-server" containerID="cri-o://e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7" gracePeriod=2 Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.021291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97816ab1-5779-4821-9bba-63b4e6d4586b" (UID: "97816ab1-5779-4821-9bba-63b4e6d4586b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.031552 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.031602 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97816ab1-5779-4821-9bba-63b4e6d4586b-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.031643 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thpth\" (UniqueName: \"kubernetes.io/projected/97816ab1-5779-4821-9bba-63b4e6d4586b-kube-api-access-thpth\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.154434 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq"] Mar 16 15:17:48 crc kubenswrapper[4736]: E0316 15:17:48.155352 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="extract-content" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.155552 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="extract-content" Mar 16 15:17:48 crc kubenswrapper[4736]: E0316 15:17:48.155685 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="registry-server" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.155793 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="registry-server" Mar 16 15:17:48 crc kubenswrapper[4736]: E0316 15:17:48.155933 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1246239d-d9d8-4a08-8068-2d03260a7365" containerName="controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.156055 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1246239d-d9d8-4a08-8068-2d03260a7365" containerName="controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: E0316 15:17:48.156200 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="extract-utilities" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.156330 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="extract-utilities" Mar 16 15:17:48 crc kubenswrapper[4736]: E0316 15:17:48.156444 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" containerName="route-controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.156550 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" containerName="route-controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.156883 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1246239d-d9d8-4a08-8068-2d03260a7365" containerName="controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.157030 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" containerName="registry-server" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.157194 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" containerName="route-controller-manager" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.158229 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.162940 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.163194 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.163450 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.163667 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.164128 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.169290 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.170244 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.170437 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.174814 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.174884 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.174939 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.175033 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.175255 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.175453 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.177244 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.180830 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.184794 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.235800 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-proxy-ca-bundles\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.235852 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6gd\" (UniqueName: \"kubernetes.io/projected/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-kube-api-access-vv6gd\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.235890 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-client-ca\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.235911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-config\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.235942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-serving-cert\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.336996 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-client-ca\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337067 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-config\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337127 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm6bz\" (UniqueName: \"kubernetes.io/projected/8689f548-c815-44d2-bd27-6d3358162480-kube-api-access-gm6bz\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337149 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-serving-cert\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337186 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-config\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337213 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8689f548-c815-44d2-bd27-6d3358162480-serving-cert\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337246 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-client-ca\") pod 
\"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337264 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-proxy-ca-bundles\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.337282 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv6gd\" (UniqueName: \"kubernetes.io/projected/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-kube-api-access-vv6gd\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.339083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-proxy-ca-bundles\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.339149 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-client-ca\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.339273 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-config\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.343432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-serving-cert\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.359658 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.359785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv6gd\" (UniqueName: \"kubernetes.io/projected/7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9-kube-api-access-vv6gd\") pod \"controller-manager-697bfb5dcc-2mrmq\" (UID: \"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9\") " pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438614 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities\") pod \"3c4444c3-8376-49ab-a094-348594b05bdb\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438729 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lwdt\" (UniqueName: \"kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt\") pod \"3c4444c3-8376-49ab-a094-348594b05bdb\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438749 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content\") pod \"3c4444c3-8376-49ab-a094-348594b05bdb\" (UID: \"3c4444c3-8376-49ab-a094-348594b05bdb\") " Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438923 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm6bz\" (UniqueName: \"kubernetes.io/projected/8689f548-c815-44d2-bd27-6d3358162480-kube-api-access-gm6bz\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438966 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-config\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.438992 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8689f548-c815-44d2-bd27-6d3358162480-serving-cert\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.439015 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-client-ca\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.439902 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-client-ca\") 
pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.440724 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities" (OuterVolumeSpecName: "utilities") pod "3c4444c3-8376-49ab-a094-348594b05bdb" (UID: "3c4444c3-8376-49ab-a094-348594b05bdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.446352 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt" (OuterVolumeSpecName: "kube-api-access-9lwdt") pod "3c4444c3-8376-49ab-a094-348594b05bdb" (UID: "3c4444c3-8376-49ab-a094-348594b05bdb"). InnerVolumeSpecName "kube-api-access-9lwdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.447705 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8689f548-c815-44d2-bd27-6d3358162480-config\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.452693 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8689f548-c815-44d2-bd27-6d3358162480-serving-cert\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.461122 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm6bz\" (UniqueName: \"kubernetes.io/projected/8689f548-c815-44d2-bd27-6d3358162480-kube-api-access-gm6bz\") pod \"route-controller-manager-6c78568d4c-5mng5\" (UID: \"8689f548-c815-44d2-bd27-6d3358162480\") " pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.469931 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c4444c3-8376-49ab-a094-348594b05bdb" (UID: "3c4444c3-8376-49ab-a094-348594b05bdb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.497694 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.509171 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.540225 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.540266 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lwdt\" (UniqueName: \"kubernetes.io/projected/3c4444c3-8376-49ab-a094-348594b05bdb-kube-api-access-9lwdt\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.540280 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4444c3-8376-49ab-a094-348594b05bdb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.731789 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kz6z" event={"ID":"97816ab1-5779-4821-9bba-63b4e6d4586b","Type":"ContainerDied","Data":"dd212cb0479fd262985e9374633defa3b6809fffcab5cff743342b606faca3d4"} Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.732309 4736 scope.go:117] "RemoveContainer" containerID="ce635b3bb11314063cc8faab5ed8a09c515658044fa14d103e82d9fc3e6e207e" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.732407 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kz6z" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.773206 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c4444c3-8376-49ab-a094-348594b05bdb" containerID="e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7" exitCode=0 Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.773285 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerDied","Data":"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7"} Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.773321 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dps9k" event={"ID":"3c4444c3-8376-49ab-a094-348594b05bdb","Type":"ContainerDied","Data":"320f36cfa17ae95b40451342750c8e1df9f9ddd423cfcee96ac36f3be8bfb37a"} Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.773413 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dps9k" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.822877 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.835930 4736 scope.go:117] "RemoveContainer" containerID="60a30547e8f8d34c6087f57e7fe28ca1ef139d1225972d999355dd6c0694e6af" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.855693 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kz6z"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.876181 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.878632 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dps9k"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.889775 4736 scope.go:117] "RemoveContainer" containerID="2581552dafdf852f01bc737a7388894a9ba69a6d680d4d83a031660169d07f92" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.908182 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq"] Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.931382 4736 scope.go:117] "RemoveContainer" containerID="e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.972155 4736 scope.go:117] "RemoveContainer" containerID="bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.983810 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1246239d-d9d8-4a08-8068-2d03260a7365" path="/var/lib/kubelet/pods/1246239d-d9d8-4a08-8068-2d03260a7365/volumes" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.984533 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" path="/var/lib/kubelet/pods/3c4444c3-8376-49ab-a094-348594b05bdb/volumes" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.985193 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fd1022-975a-43a4-b97c-ab2de9b0aacb" path="/var/lib/kubelet/pods/59fd1022-975a-43a4-b97c-ab2de9b0aacb/volumes" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.985707 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97816ab1-5779-4821-9bba-63b4e6d4586b" path="/var/lib/kubelet/pods/97816ab1-5779-4821-9bba-63b4e6d4586b/volumes" Mar 16 15:17:48 crc kubenswrapper[4736]: I0316 15:17:48.987283 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5"] Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.019701 4736 scope.go:117] "RemoveContainer" containerID="cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.051603 4736 scope.go:117] "RemoveContainer" containerID="e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7" Mar 16 15:17:49 crc kubenswrapper[4736]: E0316 15:17:49.053224 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7\": container with ID starting with 
e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7 not found: ID does not exist" containerID="e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.053264 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7"} err="failed to get container status \"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7\": rpc error: code = NotFound desc = could not find container \"e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7\": container with ID starting with e189afeeb23313a01f2d6ecfb1b597f2da43c48a7c824d1cdf7957407db493f7 not found: ID does not exist" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.053308 4736 scope.go:117] "RemoveContainer" containerID="bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665" Mar 16 15:17:49 crc kubenswrapper[4736]: E0316 15:17:49.053662 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665\": container with ID starting with bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665 not found: ID does not exist" containerID="bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.053691 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665"} err="failed to get container status \"bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665\": rpc error: code = NotFound desc = could not find container \"bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665\": container with ID starting with bbc3391f2ccdcc30e935c13bfc2ead8cad727c6dc8ec8cd0f60564d091ec0665 not found: ID does not exist" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.053705 4736 scope.go:117] "RemoveContainer" containerID="cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc" Mar 16 15:17:49 crc kubenswrapper[4736]: E0316 15:17:49.053973 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc\": container with ID starting with cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc not found: ID does not exist" containerID="cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.053999 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc"} err="failed to get container status \"cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc\": rpc error: code = NotFound desc = could not find container \"cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc\": container with ID starting with cf69c4a4a6f43ea169a9b04463334d0e0ef3bd4b123e6a63fb5b42b9351006fc not found: ID does not exist" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.796705 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" 
event={"ID":"8689f548-c815-44d2-bd27-6d3358162480","Type":"ContainerStarted","Data":"81297ce68d472c73984a413a9341745498f245b0ecfc34a81e953fc7ebd531ab"} Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.796755 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" event={"ID":"8689f548-c815-44d2-bd27-6d3358162480","Type":"ContainerStarted","Data":"af5b7ee9b87209d5950e2736ab3b3cf598ea53571f52db44910f1bb7a9da55f8"} Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.798083 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.800020 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" event={"ID":"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9","Type":"ContainerStarted","Data":"4c868133d356d86ab34a705ad7c3e480fa969e780792fe496de77ac689661480"} Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.800046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" event={"ID":"7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9","Type":"ContainerStarted","Data":"77ad5fd66c1f69966a03b5c27eb01dc7af444a11686867920aaed71350d778b6"} Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.800642 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.806889 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.808073 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" Mar 16 15:17:49 crc kubenswrapper[4736]: I0316 15:17:49.824304 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podStartSLOduration=3.8242849359999997 podStartE2EDuration="3.824284936s" podCreationTimestamp="2026-03-16 15:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:49.820918948 +0000 UTC m=+271.548309235" watchObservedRunningTime="2026-03-16 15:17:49.824284936 +0000 UTC m=+271.551675223" Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.342732 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" podStartSLOduration=4.342710894 podStartE2EDuration="4.342710894s" podCreationTimestamp="2026-03-16 15:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:17:49.865896428 +0000 UTC m=+271.593286715" watchObservedRunningTime="2026-03-16 15:17:50.342710894 +0000 UTC m=+272.070101181" Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.343920 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.344160 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-lvjdd" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="registry-server" containerID="cri-o://466422a466d0c4f26d0e10f87cf93441534a75cdefc8d3cf35cf255f61ecc6e5" gracePeriod=2 Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.832700 4736 generic.go:334] "Generic (PLEG): container finished" podID="d5522833-034d-43cc-954b-427ef568ce69" containerID="466422a466d0c4f26d0e10f87cf93441534a75cdefc8d3cf35cf255f61ecc6e5" exitCode=0 Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.832774 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerDied","Data":"466422a466d0c4f26d0e10f87cf93441534a75cdefc8d3cf35cf255f61ecc6e5"} Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.858473 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.995824 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8xhn\" (UniqueName: \"kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn\") pod \"d5522833-034d-43cc-954b-427ef568ce69\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.995909 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities\") pod \"d5522833-034d-43cc-954b-427ef568ce69\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.996056 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content\") pod \"d5522833-034d-43cc-954b-427ef568ce69\" (UID: \"d5522833-034d-43cc-954b-427ef568ce69\") " Mar 16 15:17:50 crc kubenswrapper[4736]: I0316 15:17:50.998324 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities" (OuterVolumeSpecName: "utilities") pod "d5522833-034d-43cc-954b-427ef568ce69" (UID: "d5522833-034d-43cc-954b-427ef568ce69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.009999 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn" (OuterVolumeSpecName: "kube-api-access-c8xhn") pod "d5522833-034d-43cc-954b-427ef568ce69" (UID: "d5522833-034d-43cc-954b-427ef568ce69"). InnerVolumeSpecName "kube-api-access-c8xhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.101589 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.101621 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8xhn\" (UniqueName: \"kubernetes.io/projected/d5522833-034d-43cc-954b-427ef568ce69-kube-api-access-c8xhn\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.170369 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5522833-034d-43cc-954b-427ef568ce69" (UID: "d5522833-034d-43cc-954b-427ef568ce69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.205299 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5522833-034d-43cc-954b-427ef568ce69-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.858687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lvjdd" event={"ID":"d5522833-034d-43cc-954b-427ef568ce69","Type":"ContainerDied","Data":"8266123176f45f541c52df398b8a9eecf289e0a1974dfdeaef02b00f0a8941c2"} Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.859370 4736 scope.go:117] "RemoveContainer" containerID="466422a466d0c4f26d0e10f87cf93441534a75cdefc8d3cf35cf255f61ecc6e5" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.858735 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lvjdd" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.886073 4736 scope.go:117] "RemoveContainer" containerID="d93b714827ae3f55857b40bebe05954821c8e7712858821b07cc082ef054ef59" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.934900 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.940067 4736 scope.go:117] "RemoveContainer" containerID="09f0a15b6fdb9e131cce6d7e2c2dfb000798917494b24142965e4e28390ce0dd" Mar 16 15:17:51 crc kubenswrapper[4736]: I0316 15:17:51.940774 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lvjdd"] Mar 16 15:17:52 crc kubenswrapper[4736]: I0316 15:17:52.306301 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 16 15:17:52 crc kubenswrapper[4736]: I0316 15:17:52.815987 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:17:52 crc kubenswrapper[4736]: I0316 15:17:52.816686 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:17:52 crc kubenswrapper[4736]: I0316 15:17:52.892205 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:17:52 crc kubenswrapper[4736]: I0316 15:17:52.985270 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5522833-034d-43cc-954b-427ef568ce69" path="/var/lib/kubelet/pods/d5522833-034d-43cc-954b-427ef568ce69/volumes" Mar 16 15:17:53 crc kubenswrapper[4736]: I0316 15:17:53.265007 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:53 crc kubenswrapper[4736]: I0316 15:17:53.265061 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:53 crc kubenswrapper[4736]: I0316 15:17:53.311970 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:53 crc kubenswrapper[4736]: I0316 15:17:53.918014 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:53 crc kubenswrapper[4736]: I0316 15:17:53.948240 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:17:56 crc kubenswrapper[4736]: I0316 15:17:56.746927 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:17:56 crc kubenswrapper[4736]: I0316 15:17:56.747823 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t25g4" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="registry-server" containerID="cri-o://247bbe9b3ba888028cd217128f24c68e127f92b04ffe072c26276b8f084c8e95" gracePeriod=2 Mar 16 15:17:56 crc kubenswrapper[4736]: I0316 15:17:56.899361 4736 generic.go:334] "Generic (PLEG): container finished" podID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerID="247bbe9b3ba888028cd217128f24c68e127f92b04ffe072c26276b8f084c8e95" exitCode=0 Mar 16 
15:17:56 crc kubenswrapper[4736]: I0316 15:17:56.899426 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerDied","Data":"247bbe9b3ba888028cd217128f24c68e127f92b04ffe072c26276b8f084c8e95"} Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.251772 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.411508 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content\") pod \"26502a77-e11a-496c-8bf5-483d61d2ed8d\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.411822 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities\") pod \"26502a77-e11a-496c-8bf5-483d61d2ed8d\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.412061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvv9z\" (UniqueName: \"kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z\") pod \"26502a77-e11a-496c-8bf5-483d61d2ed8d\" (UID: \"26502a77-e11a-496c-8bf5-483d61d2ed8d\") " Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.413025 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities" (OuterVolumeSpecName: "utilities") pod "26502a77-e11a-496c-8bf5-483d61d2ed8d" (UID: "26502a77-e11a-496c-8bf5-483d61d2ed8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.413502 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.426334 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z" (OuterVolumeSpecName: "kube-api-access-lvv9z") pod "26502a77-e11a-496c-8bf5-483d61d2ed8d" (UID: "26502a77-e11a-496c-8bf5-483d61d2ed8d"). InnerVolumeSpecName "kube-api-access-lvv9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.478926 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26502a77-e11a-496c-8bf5-483d61d2ed8d" (UID: "26502a77-e11a-496c-8bf5-483d61d2ed8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.515548 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvv9z\" (UniqueName: \"kubernetes.io/projected/26502a77-e11a-496c-8bf5-483d61d2ed8d-kube-api-access-lvv9z\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.515820 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26502a77-e11a-496c-8bf5-483d61d2ed8d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.909420 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t25g4" event={"ID":"26502a77-e11a-496c-8bf5-483d61d2ed8d","Type":"ContainerDied","Data":"dc7890518e14952e1980342bd1f2281aef4266457f56f0ecc4a3b773f0123f78"} Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.909514 4736 scope.go:117] "RemoveContainer" containerID="247bbe9b3ba888028cd217128f24c68e127f92b04ffe072c26276b8f084c8e95" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.909592 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t25g4" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.935701 4736 scope.go:117] "RemoveContainer" containerID="090dfa0f0dca5d28837aaefc14dd133148294f93f67e780537821ab3d8b3b756" Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.956249 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.960137 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t25g4"] Mar 16 15:17:57 crc kubenswrapper[4736]: I0316 15:17:57.969357 4736 scope.go:117] "RemoveContainer" containerID="5b12af9e3b54963b5e8e5b98814ed043daad58c499c93fd5863a3f1e51120dd5" Mar 16 15:17:58 crc kubenswrapper[4736]: I0316 15:17:58.987288 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" path="/var/lib/kubelet/pods/26502a77-e11a-496c-8bf5-483d61d2ed8d/volumes" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.922631 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.922929 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.922946 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.922959 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.922970 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.922983 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.922992 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923003 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923011 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923024 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923032 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923048 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923055 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="extract-utilities" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923066 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923074 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="extract-content" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923084 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923093 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.923136 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923148 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923281 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5522833-034d-43cc-954b-427ef568ce69" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923303 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4444c3-8376-49ab-a094-348594b05bdb" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923315 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="26502a77-e11a-496c-8bf5-483d61d2ed8d" containerName="registry-server" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923784 4736 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.923997 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.924135 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c" gracePeriod=15 Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.924204 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144" gracePeriod=15 Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.924317 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630" gracePeriod=15 Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.924349 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722" gracePeriod=15 Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.924298 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b" gracePeriod=15 Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925220 4736 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925388 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925402 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925412 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925420 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925431 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925439 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925448 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925455 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925468 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925476 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925491 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925501 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925510 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925518 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925528 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925535 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925670 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925682 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925695 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925707 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925716 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925725 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925737 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925745 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925877 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925887 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: E0316 15:17:59.925904 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.925913 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.926026 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 16 15:17:59 crc kubenswrapper[4736]: I0316 15:17:59.975695 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056200 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056238 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056289 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056385 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.056427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158230 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158300 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158322 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158406 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158432 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158457 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158526 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158570 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158592 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158616 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158639 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158672 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158715 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.158757 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.277734 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:00 crc kubenswrapper[4736]: W0316 15:18:00.305805 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-15631be1ebed4e15446ef7a0854def2c2921c580417761433b98b39ec3146720 WatchSource:0}: Error finding container 15631be1ebed4e15446ef7a0854def2c2921c580417761433b98b39ec3146720: Status 404 returned error can't find the container with id 15631be1ebed4e15446ef7a0854def2c2921c580417761433b98b39ec3146720 Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.312124 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189d5b5c512b5219 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:18:00.310755865 +0000 UTC m=+282.038146152,LastTimestamp:2026-03-16 15:18:00.310755865 +0000 UTC m=+282.038146152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.322247 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.322574 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.323160 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.323909 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.324618 4736 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 
crc kubenswrapper[4736]: I0316 15:18:00.324671 4736 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.325037 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.526019 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.719836 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:18:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:18:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:18:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-16T15:18:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.721249 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.721877 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.722437 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.722877 4736 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.722917 4736 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 16 15:18:00 crc kubenswrapper[4736]: E0316 15:18:00.927760 4736 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.955308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e2b9e2ab9bc96f7fc86b7d8bd37f26e03f0aa13ddcaca1cd6f9fdb4b9d1bdfa7"} Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.955376 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"15631be1ebed4e15446ef7a0854def2c2921c580417761433b98b39ec3146720"} Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.957004 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.957613 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.959396 4736 generic.go:334] "Generic (PLEG): container finished" podID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" containerID="05c0f04ce8c46d4bb85549b4f8dcdcc3b4142b6c4f8fc2685f187f1577bc08b8" exitCode=0 Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.959492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7d5f695c-45db-4f9e-8dbd-8bf8b5718855","Type":"ContainerDied","Data":"05c0f04ce8c46d4bb85549b4f8dcdcc3b4142b6c4f8fc2685f187f1577bc08b8"} Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.960319 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.961375 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.961922 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.965830 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.967720 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.968578 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144" exitCode=0 Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.968604 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b" exitCode=0 Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.968615 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630" exitCode=0 Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.968628 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722" exitCode=2 Mar 16 15:18:00 crc kubenswrapper[4736]: I0316 15:18:00.968687 4736 scope.go:117] "RemoveContainer" containerID="df990fa683ff1e7ef0120f9f98a3c75f7ef0c1e46c0b91a576087a205ee658ab" Mar 16 15:18:01 crc kubenswrapper[4736]: E0316 15:18:01.729869 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Mar 16 15:18:01 crc kubenswrapper[4736]: I0316 15:18:01.981005 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.424388 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.425600 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.426085 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.426497 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.426918 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.543921 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.544667 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.544917 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.545179 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548002 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548238 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548337 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548577 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548621 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548641 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548809 4736 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548833 4736 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.548850 4736 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650311 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir\") pod \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650476 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access\") pod \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock\") pod \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\" (UID: \"7d5f695c-45db-4f9e-8dbd-8bf8b5718855\") " Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650592 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d5f695c-45db-4f9e-8dbd-8bf8b5718855" (UID: "7d5f695c-45db-4f9e-8dbd-8bf8b5718855"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650723 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock" (OuterVolumeSpecName: "var-lock") pod "7d5f695c-45db-4f9e-8dbd-8bf8b5718855" (UID: "7d5f695c-45db-4f9e-8dbd-8bf8b5718855"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650916 4736 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-var-lock\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.650939 4736 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.661988 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d5f695c-45db-4f9e-8dbd-8bf8b5718855" (UID: "7d5f695c-45db-4f9e-8dbd-8bf8b5718855"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.752028 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d5f695c-45db-4f9e-8dbd-8bf8b5718855-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.989875 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.992365 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.995173 4736 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c" exitCode=0 Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.995279 4736 scope.go:117] "RemoveContainer" containerID="1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.995437 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:02 crc kubenswrapper[4736]: I0316 15:18:02.999699 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:02.999990 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.000324 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.003060 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7d5f695c-45db-4f9e-8dbd-8bf8b5718855","Type":"ContainerDied","Data":"8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d"} Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.003192 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8449eac36684e55c04ccb11e5aa39e4e5ef9f72fbe3da0ad71e1281b8fd4f17d" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.003264 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.022234 4736 scope.go:117] "RemoveContainer" containerID="e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.029716 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.031224 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.031718 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.033048 4736 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.033453 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.034500 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.051955 4736 scope.go:117] "RemoveContainer" containerID="28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.075795 4736 scope.go:117] "RemoveContainer" containerID="7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.101360 4736 scope.go:117] "RemoveContainer" containerID="dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.119384 4736 scope.go:117] "RemoveContainer" containerID="e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.147920 4736 scope.go:117] "RemoveContainer" containerID="1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144" Mar 16 15:18:03 crc 
kubenswrapper[4736]: E0316 15:18:03.148600 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\": container with ID starting with 1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144 not found: ID does not exist" containerID="1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.148699 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144"} err="failed to get container status \"1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\": rpc error: code = NotFound desc = could not find container \"1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144\": container with ID starting with 1136babff63148657c8e26362f097ea1584bb3bc0be98915c41f869ddd2c9144 not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.149332 4736 scope.go:117] "RemoveContainer" containerID="e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.150068 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\": container with ID starting with e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b not found: ID does not exist" containerID="e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.150133 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b"} err="failed to get container status \"e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\": rpc error: code = NotFound desc = could not find container \"e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b\": container with ID starting with e7902ed250a7a60e05ebdf92de1f46183376b1e9ef73893e7de65edeaa35760b not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.150164 4736 scope.go:117] "RemoveContainer" containerID="28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.150457 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\": container with ID starting with 28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630 not found: ID does not exist" containerID="28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.150488 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630"} err="failed to get container status \"28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\": rpc error: code = NotFound desc = could not find container \"28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630\": container with ID starting with 28a8e5e9e0bad5da8847cbea52b5b0145b0fee8861b0864e21698d4e9a7f9630 not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: 
I0316 15:18:03.150503 4736 scope.go:117] "RemoveContainer" containerID="7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.150727 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\": container with ID starting with 7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722 not found: ID does not exist" containerID="7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.150742 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722"} err="failed to get container status \"7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\": rpc error: code = NotFound desc = could not find container \"7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722\": container with ID starting with 7ac2647e0506ac814650e7093d45cc4826ecec58c8571856b1abc0767934d722 not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.150753 4736 scope.go:117] "RemoveContainer" containerID="dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.151184 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\": container with ID starting with dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c not found: ID does not exist" containerID="dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.151204 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c"} err="failed to get container status \"dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\": rpc error: code = NotFound desc = could not find container \"dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c\": container with ID starting with dbf6788d965301202994ce3c6e506f8e18c1999d327150583bdeace23d0fb50c not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.151219 4736 scope.go:117] "RemoveContainer" containerID="e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.151511 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\": container with ID starting with e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6 not found: ID does not exist" containerID="e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6" Mar 16 15:18:03 crc kubenswrapper[4736]: I0316 15:18:03.151527 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6"} err="failed to get container status \"e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\": rpc error: code = NotFound desc = could not find container \"e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6\": container 
with ID starting with e4a6333ea1b9e85f92e3f2a3991ec2ae063c557dd2fbbc187c65517fdfa7e2c6 not found: ID does not exist" Mar 16 15:18:03 crc kubenswrapper[4736]: E0316 15:18:03.330481 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Mar 16 15:18:04 crc kubenswrapper[4736]: E0316 15:18:04.258566 4736 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189d5b5c512b5219 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-16 15:18:00.310755865 +0000 UTC m=+282.038146152,LastTimestamp:2026-03-16 15:18:00.310755865 +0000 UTC m=+282.038146152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 16 15:18:06 crc kubenswrapper[4736]: E0316 15:18:06.532000 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="6.4s" Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.507861 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.509431 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.509552 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.510709 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.510837 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8" gracePeriod=600 Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.981860 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:08 crc kubenswrapper[4736]: I0316 15:18:08.982897 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.057223 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8" exitCode=0 Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.057283 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8"} Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.057315 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7"} Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.058601 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.058809 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:09 crc kubenswrapper[4736]: I0316 15:18:09.059121 4736 status_manager.go:851] "Failed to get status for pod" podUID="45c93e24-5358-402f-9ace-e85478dedb49" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-j9cg2\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.977385 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.978583 4736 status_manager.go:851] "Failed to get status for pod" podUID="45c93e24-5358-402f-9ace-e85478dedb49" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-j9cg2\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.978943 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.979323 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.997957 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:11 crc kubenswrapper[4736]: I0316 15:18:11.998000 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:11 crc kubenswrapper[4736]: E0316 15:18:11.998736 4736 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:12 crc kubenswrapper[4736]: I0316 15:18:12.000150 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:12 crc kubenswrapper[4736]: W0316 15:18:12.058050 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1b03ec747d7968e4a98a48e3388d87c2b0ce19e9f1145037def06a1dc7002b71 WatchSource:0}: Error finding container 1b03ec747d7968e4a98a48e3388d87c2b0ce19e9f1145037def06a1dc7002b71: Status 404 returned error can't find the container with id 1b03ec747d7968e4a98a48e3388d87c2b0ce19e9f1145037def06a1dc7002b71 Mar 16 15:18:12 crc kubenswrapper[4736]: I0316 15:18:12.130289 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b03ec747d7968e4a98a48e3388d87c2b0ce19e9f1145037def06a1dc7002b71"} Mar 16 15:18:12 crc kubenswrapper[4736]: E0316 15:18:12.933529 4736 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="7s" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.142063 4736 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="27ebe377a8c5f0f273dd5ad3e1b79064253ec02a76d97dc42b045fd7b234c596" exitCode=0 Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.142209 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"27ebe377a8c5f0f273dd5ad3e1b79064253ec02a76d97dc42b045fd7b234c596"} Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.142835 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.142883 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:13 crc kubenswrapper[4736]: E0316 15:18:13.143903 4736 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.143977 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.144617 4736 status_manager.go:851] "Failed to get status for pod" podUID="45c93e24-5358-402f-9ace-e85478dedb49" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-j9cg2\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.145066 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.145824 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.148641 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.148715 4736 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5e5468b43d6997c8121c6f55c925c07ad833511ca569242bfbb99b4c49f9b712" exitCode=1 Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.148756 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5e5468b43d6997c8121c6f55c925c07ad833511ca569242bfbb99b4c49f9b712"} Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.149564 4736 scope.go:117] "RemoveContainer" containerID="5e5468b43d6997c8121c6f55c925c07ad833511ca569242bfbb99b4c49f9b712" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.149717 4736 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.149936 4736 status_manager.go:851] "Failed to get status for pod" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.150181 4736 status_manager.go:851] "Failed to get status for pod" podUID="45c93e24-5358-402f-9ace-e85478dedb49" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-j9cg2\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:13 crc kubenswrapper[4736]: I0316 15:18:13.150528 4736 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.158191 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c408081c9b64a358b0f1960e33ec82dcbda4962bda72346668a79e60105b2e3"} Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.159067 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"16eb0eb5bbe530e65fd7823b17b537ff74ccfb607a7c15d8393cda3432a786b0"} Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.159083 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"55e74b4ea85a17e8411da3cde3147972a861db58024c2414ac7c8f0cd8997ea9"} Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.165010 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.168654 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 16 15:18:14 crc kubenswrapper[4736]: I0316 15:18:14.168720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8c858879965660c3666f856c1a09b94be4961bfc1fa4ac3161cb694730e48794"} Mar 16 15:18:15 crc kubenswrapper[4736]: I0316 15:18:15.179015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf176e959acaa3b6e804aeb36ab0b768a7220662f18c7daec4d24df69a137c80"} Mar 16 15:18:15 crc kubenswrapper[4736]: I0316 15:18:15.179626 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d334b342fa222c2972cd77e9ef868f9a7dda1599cb31faca846ff1058b6baafc"} Mar 16 15:18:15 crc kubenswrapper[4736]: I0316 15:18:15.180075 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:15 crc kubenswrapper[4736]: I0316 15:18:15.180090 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:15 crc kubenswrapper[4736]: I0316 15:18:15.180367 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:17 crc kubenswrapper[4736]: I0316 15:18:17.000728 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:17 crc kubenswrapper[4736]: I0316 15:18:17.000784 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:17 crc kubenswrapper[4736]: I0316 15:18:17.006994 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:20 crc kubenswrapper[4736]: I0316 15:18:20.170784 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:18:20 crc kubenswrapper[4736]: I0316 15:18:20.177832 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:18:20 crc kubenswrapper[4736]: I0316 15:18:20.196818 4736 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:20 crc kubenswrapper[4736]: I0316 15:18:20.201097 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:18:20 crc kubenswrapper[4736]: I0316 15:18:20.244389 4736 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="de9f312c-1445-416e-b692-ce5c64dfbdfa" Mar 16 15:18:21 crc kubenswrapper[4736]: I0316 15:18:21.213584 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:21 crc kubenswrapper[4736]: I0316 15:18:21.214191 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:21 crc kubenswrapper[4736]: I0316 15:18:21.220005 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:21 crc kubenswrapper[4736]: I0316 15:18:21.220521 4736 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="de9f312c-1445-416e-b692-ce5c64dfbdfa" Mar 16 15:18:22 crc kubenswrapper[4736]: I0316 15:18:22.223055 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:22 crc kubenswrapper[4736]: I0316 15:18:22.225294 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:22 crc kubenswrapper[4736]: I0316 15:18:22.227909 4736 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="de9f312c-1445-416e-b692-ce5c64dfbdfa" Mar 16 15:18:30 crc kubenswrapper[4736]: I0316 15:18:30.159842 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 16 15:18:30 crc kubenswrapper[4736]: I0316 15:18:30.210312 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 16 15:18:30 crc kubenswrapper[4736]: I0316 15:18:30.472662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 16 15:18:30 crc kubenswrapper[4736]: I0316 15:18:30.858778 4736 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.386202 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.455425 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.653736 4736 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.806381 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.918597 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 16 15:18:31 crc kubenswrapper[4736]: I0316 15:18:31.933303 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.175069 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.335410 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.349995 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.676814 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.749938 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.763363 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 16 15:18:32 crc kubenswrapper[4736]: I0316 15:18:32.990183 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.009158 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.507208 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.528800 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.726094 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.848171 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.888382 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.936660 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.937870 4736 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.942092 4736 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.942069083 podStartE2EDuration="34.942069083s" podCreationTimestamp="2026-03-16 15:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:18:20.13872239 +0000 UTC m=+301.866112687" watchObservedRunningTime="2026-03-16 15:18:33.942069083 +0000 UTC m=+315.669459410" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.945483 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.945563 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.946171 4736 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.946221 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="90c5713e-4cbf-4152-9dd1-2ba1ec0df626" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.958801 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 16 15:18:33 crc kubenswrapper[4736]: I0316 15:18:33.988476 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.988447274 podStartE2EDuration="13.988447274s" podCreationTimestamp="2026-03-16 15:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:18:33.981558423 +0000 UTC m=+315.708948710" watchObservedRunningTime="2026-03-16 15:18:33.988447274 +0000 UTC m=+315.715837591" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.036380 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.264466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.498228 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.538465 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.561037 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.604255 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.608680 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.612520 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.649251 
4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.654087 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.675150 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.828517 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.829858 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.891944 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.897261 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.902644 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 16 15:18:34 crc kubenswrapper[4736]: I0316 15:18:34.919625 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.156357 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.157424 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.167089 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.277257 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.449373 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.479999 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.541186 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.734373 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.748090 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.787808 4736 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.868913 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.869097 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.870973 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 16 15:18:35 crc kubenswrapper[4736]: I0316 15:18:35.962249 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.086633 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.120640 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.143132 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.160376 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.162053 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.284071 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.302220 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.342728 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.389288 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.611125 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.616519 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.663722 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.785819 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.830976 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.874712 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 16 15:18:36 crc 
kubenswrapper[4736]: I0316 15:18:36.901608 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 16 15:18:36 crc kubenswrapper[4736]: I0316 15:18:36.917909 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.010362 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.030462 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.041365 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.041765 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.190857 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.245888 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.323843 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.373275 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.453701 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.475562 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.481945 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.482461 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.507736 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.512379 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.513197 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.690767 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.773307 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.811122 
4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.823973 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 16 15:18:37 crc kubenswrapper[4736]: I0316 15:18:37.876938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.018583 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.037565 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.054204 4736 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.065693 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.067861 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.083154 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.380832 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.412722 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.451203 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.452833 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.477494 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.557295 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.632275 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.637044 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.682378 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 16 15:18:38 crc kubenswrapper[4736]: I0316 15:18:38.714183 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 16 15:18:38 crc 
kubenswrapper[4736]: I0316 15:18:38.779573 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.075562 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.077442 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.086450 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.129391 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.211020 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.220149 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.303697 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.373371 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.423749 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.471174 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.476349 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.478829 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.484389 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.493594 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.499080 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.499119 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.563349 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.565771 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: 
I0316 15:18:39.592941 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.618322 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.655832 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.668031 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.684336 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.744745 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.844436 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.899906 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.907603 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 16 15:18:39 crc kubenswrapper[4736]: I0316 15:18:39.970058 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.083293 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.096307 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.155469 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.176386 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.181055 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.332669 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.333980 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.400577 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.520466 4736 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.610507 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.766948 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 16 15:18:40 crc kubenswrapper[4736]: I0316 15:18:40.938894 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.004075 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.033857 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.082869 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.142393 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.155629 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.186464 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.199795 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.324812 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.362661 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.509091 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.602326 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.618373 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.644190 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.708296 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.714745 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.736846 4736 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.751297 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.791933 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.868373 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.922751 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.935562 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 16 15:18:41 crc kubenswrapper[4736]: I0316 15:18:41.982141 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.007157 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.082938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.116001 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.197051 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.235443 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.249938 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.347369 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.407659 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.433838 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.440323 4736 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.458965 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.618565 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.664279 4736 reflector.go:368] Caches populated for 
*v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.676812 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.856756 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.934563 4736 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 16 15:18:42 crc kubenswrapper[4736]: I0316 15:18:42.934816 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e2b9e2ab9bc96f7fc86b7d8bd37f26e03f0aa13ddcaca1cd6f9fdb4b9d1bdfa7" gracePeriod=5 Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.045681 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.062742 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.084135 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.090251 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.122375 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.222500 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.236867 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.375499 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.441178 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.446740 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.448635 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561238-dp7c4"] Mar 16 15:18:43 crc kubenswrapper[4736]: E0316 15:18:43.449171 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" containerName="installer" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.449198 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" containerName="installer" Mar 16 15:18:43 crc kubenswrapper[4736]: E0316 15:18:43.449231 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.449243 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.452578 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5f695c-45db-4f9e-8dbd-8bf8b5718855" containerName="installer" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.452611 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.455687 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.459337 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.459817 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.460088 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.503004 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561238-dp7c4"] Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.505419 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.511997 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.587625 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.597335 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khws8\" (UniqueName: \"kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8\") pod \"auto-csr-approver-29561238-dp7c4\" (UID: \"608f1e44-20c1-4c3e-80d3-80ce123c1b2b\") " pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.698692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khws8\" (UniqueName: \"kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8\") pod \"auto-csr-approver-29561238-dp7c4\" (UID: \"608f1e44-20c1-4c3e-80d3-80ce123c1b2b\") " pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.704696 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.724135 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khws8\" (UniqueName: \"kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8\") pod \"auto-csr-approver-29561238-dp7c4\" (UID: 
\"608f1e44-20c1-4c3e-80d3-80ce123c1b2b\") " pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.764336 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.783085 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.824446 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 16 15:18:43 crc kubenswrapper[4736]: I0316 15:18:43.948446 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.001372 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.105555 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.169696 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561238-dp7c4"] Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.182670 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.271699 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.290880 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.307133 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.351354 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.370465 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" event={"ID":"608f1e44-20c1-4c3e-80d3-80ce123c1b2b","Type":"ContainerStarted","Data":"f643e5d4f7643e3935158b6529ba0ef440e2b984a3fd544e1ee4e583e5eb5d6c"} Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.380915 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.403731 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.411928 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.466258 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.505233 4736 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.628868 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.630215 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.638629 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.735658 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.837544 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.952084 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.973197 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 16 15:18:44 crc kubenswrapper[4736]: I0316 15:18:44.984249 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.023756 4736 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.024849 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.052093 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.103257 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.205744 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.217917 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.281808 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.629341 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.631750 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.774250 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.778793 4736 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.810208 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 16 15:18:45 crc kubenswrapper[4736]: I0316 15:18:45.851346 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.387846 4736 generic.go:334] "Generic (PLEG): container finished" podID="608f1e44-20c1-4c3e-80d3-80ce123c1b2b" containerID="91640302b04a74294e374a0d7f0ba157d488b60040d4efcead0a384042cfff37" exitCode=0 Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.387912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" event={"ID":"608f1e44-20c1-4c3e-80d3-80ce123c1b2b","Type":"ContainerDied","Data":"91640302b04a74294e374a0d7f0ba157d488b60040d4efcead0a384042cfff37"} Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.434161 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.472681 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.472827 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.795499 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.802135 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.846402 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 16 15:18:46 crc kubenswrapper[4736]: I0316 15:18:46.874407 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.112859 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.251699 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.671034 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.767798 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.772223 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khws8\" (UniqueName: \"kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8\") pod \"608f1e44-20c1-4c3e-80d3-80ce123c1b2b\" (UID: \"608f1e44-20c1-4c3e-80d3-80ce123c1b2b\") " Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.778053 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8" (OuterVolumeSpecName: "kube-api-access-khws8") pod "608f1e44-20c1-4c3e-80d3-80ce123c1b2b" (UID: "608f1e44-20c1-4c3e-80d3-80ce123c1b2b"). InnerVolumeSpecName "kube-api-access-khws8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:18:47 crc kubenswrapper[4736]: I0316 15:18:47.874551 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khws8\" (UniqueName: \"kubernetes.io/projected/608f1e44-20c1-4c3e-80d3-80ce123c1b2b-kube-api-access-khws8\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.193931 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.369913 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.399142 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" event={"ID":"608f1e44-20c1-4c3e-80d3-80ce123c1b2b","Type":"ContainerDied","Data":"f643e5d4f7643e3935158b6529ba0ef440e2b984a3fd544e1ee4e583e5eb5d6c"} Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.399205 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f643e5d4f7643e3935158b6529ba0ef440e2b984a3fd544e1ee4e583e5eb5d6c" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.399167 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561238-dp7c4" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.406898 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.407160 4736 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e2b9e2ab9bc96f7fc86b7d8bd37f26e03f0aa13ddcaca1cd6f9fdb4b9d1bdfa7" exitCode=137 Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.492531 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.492605 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586251 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586363 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586448 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586476 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586488 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.586911 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.587204 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.587272 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.587296 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.596860 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.616168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.687728 4736 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.687766 4736 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.687776 4736 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.687785 4736 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.687794 4736 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.739634 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.984964 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.985517 4736 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.997274 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 16 15:18:48 crc kubenswrapper[4736]: I0316 15:18:48.997311 4736 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="859c39fb-608e-40e8-bf4c-4b35ea1079eb" Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.002303 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.002412 4736 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="859c39fb-608e-40e8-bf4c-4b35ea1079eb" Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.156259 4736 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.415307 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.415685 4736 scope.go:117] "RemoveContainer" containerID="e2b9e2ab9bc96f7fc86b7d8bd37f26e03f0aa13ddcaca1cd6f9fdb4b9d1bdfa7" Mar 16 15:18:49 crc kubenswrapper[4736]: I0316 15:18:49.415730 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 16 15:18:50 crc kubenswrapper[4736]: I0316 15:18:50.217813 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.278592 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.280866 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cfclg" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="registry-server" containerID="cri-o://b42da5c540efd0302155b2f45e92c10815e0ff8442d4780a124c73a18908141e" gracePeriod=30 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.295466 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.295773 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qkm5c" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="registry-server" containerID="cri-o://cb56f1e801af91d3d000d614a32589cea3358b0017e0ced619af07aaec861a21" gracePeriod=30 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.301668 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.301974 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" containerID="cri-o://d319502e04e719f2f898a6200a6256d5b73d6f35513e32f9f9e4ba732df89abd" gracePeriod=30 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.314737 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.315011 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fz7zr" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="registry-server" containerID="cri-o://da9dac940376bf434308fae1e016e001ade44584111c2149b9b8a3c24c246bd5" gracePeriod=30 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.326971 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.336189 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qcc8"] Mar 16 15:19:01 crc kubenswrapper[4736]: E0316 15:19:01.338862 4736 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="608f1e44-20c1-4c3e-80d3-80ce123c1b2b" containerName="oc" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.338960 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="608f1e44-20c1-4c3e-80d3-80ce123c1b2b" containerName="oc" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.339179 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="608f1e44-20c1-4c3e-80d3-80ce123c1b2b" containerName="oc" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.339809 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.361045 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qcc8"] Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.423592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.424022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxn55\" (UniqueName: \"kubernetes.io/projected/7c040e8d-b247-49a6-93bd-f928c704b135-kube-api-access-wxn55\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.424247 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.527442 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.527923 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.528073 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxn55\" (UniqueName: \"kubernetes.io/projected/7c040e8d-b247-49a6-93bd-f928c704b135-kube-api-access-wxn55\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc 
kubenswrapper[4736]: I0316 15:19:01.529120 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.538412 4736 generic.go:334] "Generic (PLEG): container finished" podID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerID="da9dac940376bf434308fae1e016e001ade44584111c2149b9b8a3c24c246bd5" exitCode=0 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.538473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerDied","Data":"da9dac940376bf434308fae1e016e001ade44584111c2149b9b8a3c24c246bd5"} Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.538997 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7c040e8d-b247-49a6-93bd-f928c704b135-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.555610 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxn55\" (UniqueName: \"kubernetes.io/projected/7c040e8d-b247-49a6-93bd-f928c704b135-kube-api-access-wxn55\") pod \"marketplace-operator-79b997595-4qcc8\" (UID: \"7c040e8d-b247-49a6-93bd-f928c704b135\") " pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.561655 4736 generic.go:334] "Generic (PLEG): container finished" podID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerID="d319502e04e719f2f898a6200a6256d5b73d6f35513e32f9f9e4ba732df89abd" exitCode=0 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.561756 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" event={"ID":"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b","Type":"ContainerDied","Data":"d319502e04e719f2f898a6200a6256d5b73d6f35513e32f9f9e4ba732df89abd"} Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.574412 4736 generic.go:334] "Generic (PLEG): container finished" podID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerID="b42da5c540efd0302155b2f45e92c10815e0ff8442d4780a124c73a18908141e" exitCode=0 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.574490 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerDied","Data":"b42da5c540efd0302155b2f45e92c10815e0ff8442d4780a124c73a18908141e"} Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.580906 4736 generic.go:334] "Generic (PLEG): container finished" podID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerID="cb56f1e801af91d3d000d614a32589cea3358b0017e0ced619af07aaec861a21" exitCode=0 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.582065 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tcpnf" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="registry-server" 
containerID="cri-o://99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39" gracePeriod=30 Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.582576 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerDied","Data":"cb56f1e801af91d3d000d614a32589cea3358b0017e0ced619af07aaec861a21"} Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.671488 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.774818 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.836540 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics\") pod \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.836599 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca\") pod \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.837755 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" (UID: "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.849191 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" (UID: "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.937498 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6fjr\" (UniqueName: \"kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr\") pod \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\" (UID: \"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b\") " Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.937788 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.937801 4736 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.943806 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr" (OuterVolumeSpecName: "kube-api-access-f6fjr") pod "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" (UID: "d5ddb97a-f00f-442e-bb57-4cf82b3eb29b"). InnerVolumeSpecName "kube-api-access-f6fjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.952571 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.982824 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:19:01 crc kubenswrapper[4736]: I0316 15:19:01.986282 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.040049 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6fjr\" (UniqueName: \"kubernetes.io/projected/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b-kube-api-access-f6fjr\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.140681 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content\") pod \"795c05a5-413f-4361-ab0e-6796cf7862f9\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.140724 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmz5\" (UniqueName: \"kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5\") pod \"880121ba-67c4-47f7-86a7-1c0caead4c3a\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.140787 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tzz\" (UniqueName: \"kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz\") pod \"6503814f-9075-4f44-8a49-9a79e5ac3c42\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.140881 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkjwz\" (UniqueName: \"kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz\") pod \"795c05a5-413f-4361-ab0e-6796cf7862f9\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.142785 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content\") pod \"880121ba-67c4-47f7-86a7-1c0caead4c3a\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.142821 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content\") pod \"6503814f-9075-4f44-8a49-9a79e5ac3c42\" (UID: \"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.142858 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities\") pod \"795c05a5-413f-4361-ab0e-6796cf7862f9\" (UID: \"795c05a5-413f-4361-ab0e-6796cf7862f9\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.142886 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities\") pod \"880121ba-67c4-47f7-86a7-1c0caead4c3a\" (UID: \"880121ba-67c4-47f7-86a7-1c0caead4c3a\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.142915 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities\") pod \"6503814f-9075-4f44-8a49-9a79e5ac3c42\" (UID: 
\"6503814f-9075-4f44-8a49-9a79e5ac3c42\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.144191 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities" (OuterVolumeSpecName: "utilities") pod "6503814f-9075-4f44-8a49-9a79e5ac3c42" (UID: "6503814f-9075-4f44-8a49-9a79e5ac3c42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.144965 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities" (OuterVolumeSpecName: "utilities") pod "795c05a5-413f-4361-ab0e-6796cf7862f9" (UID: "795c05a5-413f-4361-ab0e-6796cf7862f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.145518 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities" (OuterVolumeSpecName: "utilities") pod "880121ba-67c4-47f7-86a7-1c0caead4c3a" (UID: "880121ba-67c4-47f7-86a7-1c0caead4c3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.152325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz" (OuterVolumeSpecName: "kube-api-access-r4tzz") pod "6503814f-9075-4f44-8a49-9a79e5ac3c42" (UID: "6503814f-9075-4f44-8a49-9a79e5ac3c42"). InnerVolumeSpecName "kube-api-access-r4tzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.152353 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5" (OuterVolumeSpecName: "kube-api-access-snmz5") pod "880121ba-67c4-47f7-86a7-1c0caead4c3a" (UID: "880121ba-67c4-47f7-86a7-1c0caead4c3a"). InnerVolumeSpecName "kube-api-access-snmz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.156486 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.167223 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz" (OuterVolumeSpecName: "kube-api-access-hkjwz") pod "795c05a5-413f-4361-ab0e-6796cf7862f9" (UID: "795c05a5-413f-4361-ab0e-6796cf7862f9"). InnerVolumeSpecName "kube-api-access-hkjwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: W0316 15:19:02.171197 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c040e8d_b247_49a6_93bd_f928c704b135.slice/crio-ed2dd6a5ea0483c1671ee10681eb560465f2248a0707036b0ac0f3e4e24eea90 WatchSource:0}: Error finding container ed2dd6a5ea0483c1671ee10681eb560465f2248a0707036b0ac0f3e4e24eea90: Status 404 returned error can't find the container with id ed2dd6a5ea0483c1671ee10681eb560465f2248a0707036b0ac0f3e4e24eea90 Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.188176 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4qcc8"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.213417 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "795c05a5-413f-4361-ab0e-6796cf7862f9" (UID: "795c05a5-413f-4361-ab0e-6796cf7862f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.218763 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6503814f-9075-4f44-8a49-9a79e5ac3c42" (UID: "6503814f-9075-4f44-8a49-9a79e5ac3c42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.233318 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "880121ba-67c4-47f7-86a7-1c0caead4c3a" (UID: "880121ba-67c4-47f7-86a7-1c0caead4c3a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244346 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkjwz\" (UniqueName: \"kubernetes.io/projected/795c05a5-413f-4361-ab0e-6796cf7862f9-kube-api-access-hkjwz\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244376 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244393 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244404 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244414 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/880121ba-67c4-47f7-86a7-1c0caead4c3a-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244423 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6503814f-9075-4f44-8a49-9a79e5ac3c42-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244433 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795c05a5-413f-4361-ab0e-6796cf7862f9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244442 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snmz5\" (UniqueName: \"kubernetes.io/projected/880121ba-67c4-47f7-86a7-1c0caead4c3a-kube-api-access-snmz5\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.244451 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tzz\" (UniqueName: \"kubernetes.io/projected/6503814f-9075-4f44-8a49-9a79e5ac3c42-kube-api-access-r4tzz\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.346887 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsrrk\" (UniqueName: \"kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk\") pod \"cde7b0b7-ad25-49ce-8441-9f6517808da6\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.347022 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities\") pod \"cde7b0b7-ad25-49ce-8441-9f6517808da6\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.348334 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content\") pod \"cde7b0b7-ad25-49ce-8441-9f6517808da6\" (UID: \"cde7b0b7-ad25-49ce-8441-9f6517808da6\") " 
Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.350047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities" (OuterVolumeSpecName: "utilities") pod "cde7b0b7-ad25-49ce-8441-9f6517808da6" (UID: "cde7b0b7-ad25-49ce-8441-9f6517808da6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.358957 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk" (OuterVolumeSpecName: "kube-api-access-hsrrk") pod "cde7b0b7-ad25-49ce-8441-9f6517808da6" (UID: "cde7b0b7-ad25-49ce-8441-9f6517808da6"). InnerVolumeSpecName "kube-api-access-hsrrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.450256 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.450301 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsrrk\" (UniqueName: \"kubernetes.io/projected/cde7b0b7-ad25-49ce-8441-9f6517808da6-kube-api-access-hsrrk\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.494737 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cde7b0b7-ad25-49ce-8441-9f6517808da6" (UID: "cde7b0b7-ad25-49ce-8441-9f6517808da6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.551932 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde7b0b7-ad25-49ce-8441-9f6517808da6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.590599 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cfclg" event={"ID":"795c05a5-413f-4361-ab0e-6796cf7862f9","Type":"ContainerDied","Data":"28410739555aef9eca2cdcd4d5a07bf4c81278a86fc1ec288f5ee92ce8e01fbb"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.590667 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cfclg" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.590691 4736 scope.go:117] "RemoveContainer" containerID="b42da5c540efd0302155b2f45e92c10815e0ff8442d4780a124c73a18908141e" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.593982 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qkm5c" event={"ID":"6503814f-9075-4f44-8a49-9a79e5ac3c42","Type":"ContainerDied","Data":"beaf9729b22b6de8596c1b74adaffbccbfc09082f86c77272dba47151f85fd38"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.594058 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qkm5c" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.596394 4736 generic.go:334] "Generic (PLEG): container finished" podID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerID="99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39" exitCode=0 Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.596475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerDied","Data":"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.596527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tcpnf" event={"ID":"cde7b0b7-ad25-49ce-8441-9f6517808da6","Type":"ContainerDied","Data":"983fc78448b7669bba68b1675c2093ad19dcb7b7994a9f0695eec8bb00ab047e"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.596540 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tcpnf" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.598345 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" event={"ID":"7c040e8d-b247-49a6-93bd-f928c704b135","Type":"ContainerStarted","Data":"b6e183703b7b93693faffeb6e592688ddc8939e1154f7b027f2df94acc619562"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.598454 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" event={"ID":"7c040e8d-b247-49a6-93bd-f928c704b135","Type":"ContainerStarted","Data":"ed2dd6a5ea0483c1671ee10681eb560465f2248a0707036b0ac0f3e4e24eea90"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.598526 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.600904 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4qcc8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" start-of-body= Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.600971 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" podUID="7c040e8d-b247-49a6-93bd-f928c704b135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.601908 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.602254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kr8vn" event={"ID":"d5ddb97a-f00f-442e-bb57-4cf82b3eb29b","Type":"ContainerDied","Data":"881a605d8d36bfc008bf35936cff8e59e819f3376209b711756765d7d206ecd9"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.607317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fz7zr" event={"ID":"880121ba-67c4-47f7-86a7-1c0caead4c3a","Type":"ContainerDied","Data":"240100c7c1f09ef7c07acb1519dd6fb83d765b09ec61841640f9e5991ee452c6"} Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.607405 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fz7zr" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.612405 4736 scope.go:117] "RemoveContainer" containerID="6098e1484cb7d025ce673ed3d5ade854a158dca952c8af481c284f694b316996" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.642581 4736 scope.go:117] "RemoveContainer" containerID="5b78202d24b2123af522c393e8280a1eea4835d1b6b1e477e41d61e85b1dcdb4" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.645894 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" podStartSLOduration=1.645872684 podStartE2EDuration="1.645872684s" podCreationTimestamp="2026-03-16 15:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:19:02.643704203 +0000 UTC m=+344.371094500" watchObservedRunningTime="2026-03-16 15:19:02.645872684 +0000 UTC m=+344.373262971" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.675118 4736 scope.go:117] "RemoveContainer" containerID="cb56f1e801af91d3d000d614a32589cea3358b0017e0ced619af07aaec861a21" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.693506 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.700707 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tcpnf"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.707993 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.716934 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kr8vn"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.724034 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.731327 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cfclg"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.731752 4736 scope.go:117] "RemoveContainer" containerID="9849ac1005bd0e02a40354df3fc7734e6f42f4a4d1dd969ac2d94f979b325425" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.734568 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.737260 4736 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fz7zr"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.740920 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.744939 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qkm5c"] Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.760587 4736 scope.go:117] "RemoveContainer" containerID="e2c4a70f32e43c645141549522a3c2c2e12924943f5f0c5f048846518d3e3eb8" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.778643 4736 scope.go:117] "RemoveContainer" containerID="99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.809869 4736 scope.go:117] "RemoveContainer" containerID="8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.830586 4736 scope.go:117] "RemoveContainer" containerID="dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.848834 4736 scope.go:117] "RemoveContainer" containerID="99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39" Mar 16 15:19:02 crc kubenswrapper[4736]: E0316 15:19:02.849522 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39\": container with ID starting with 99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39 not found: ID does not exist" containerID="99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.849647 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39"} err="failed to get container status \"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39\": rpc error: code = NotFound desc = could not find container \"99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39\": container with ID starting with 99b8ada50feaa798184be7afd3b227c772aa07ebf078bb249112e0a83ad04e39 not found: ID does not exist" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.849765 4736 scope.go:117] "RemoveContainer" containerID="8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017" Mar 16 15:19:02 crc kubenswrapper[4736]: E0316 15:19:02.850253 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017\": container with ID starting with 8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017 not found: ID does not exist" containerID="8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.850308 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017"} err="failed to get container status \"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017\": rpc error: code = NotFound desc = could not find container \"8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017\": container with ID starting with 
8a04c9711d64214d73ac06562faa22cb71b9fba1a5f1d882ad83ffc33eb6f017 not found: ID does not exist" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.850349 4736 scope.go:117] "RemoveContainer" containerID="dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5" Mar 16 15:19:02 crc kubenswrapper[4736]: E0316 15:19:02.850738 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5\": container with ID starting with dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5 not found: ID does not exist" containerID="dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.850776 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5"} err="failed to get container status \"dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5\": rpc error: code = NotFound desc = could not find container \"dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5\": container with ID starting with dc0b14879a2829db11025f69e5fe37aa97efbe2c6e54916bf564e3fb7903a7e5 not found: ID does not exist" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.850792 4736 scope.go:117] "RemoveContainer" containerID="d319502e04e719f2f898a6200a6256d5b73d6f35513e32f9f9e4ba732df89abd" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.865866 4736 scope.go:117] "RemoveContainer" containerID="da9dac940376bf434308fae1e016e001ade44584111c2149b9b8a3c24c246bd5" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.879953 4736 scope.go:117] "RemoveContainer" containerID="67144720a29b5aec9b60f2b279512d9f77b84cba034a0763e9743faa49a08ba9" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.894299 4736 scope.go:117] "RemoveContainer" containerID="7944eccfa093949af54052118a9973403889526f4aa971d781d02cc4e66cc879" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.986181 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" path="/var/lib/kubelet/pods/6503814f-9075-4f44-8a49-9a79e5ac3c42/volumes" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.987034 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" path="/var/lib/kubelet/pods/795c05a5-413f-4361-ab0e-6796cf7862f9/volumes" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.987656 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" path="/var/lib/kubelet/pods/880121ba-67c4-47f7-86a7-1c0caead4c3a/volumes" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.988646 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" path="/var/lib/kubelet/pods/cde7b0b7-ad25-49ce-8441-9f6517808da6/volumes" Mar 16 15:19:02 crc kubenswrapper[4736]: I0316 15:19:02.989298 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" path="/var/lib/kubelet/pods/d5ddb97a-f00f-442e-bb57-4cf82b3eb29b/volumes" Mar 16 15:19:03 crc kubenswrapper[4736]: I0316 15:19:03.627227 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 
15:19:55.480067 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-plpwc"] Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481018 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481034 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481048 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481058 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481071 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481079 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481090 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481098 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481117 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481137 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481151 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481159 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481172 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481180 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481191 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481200 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481212 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="registry-server" Mar 16 15:19:55 crc 
kubenswrapper[4736]: I0316 15:19:55.481221 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481232 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481239 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="extract-content" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481248 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481256 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="extract-utilities" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481267 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481274 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: E0316 15:19:55.481288 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481296 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481420 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ddb97a-f00f-442e-bb57-4cf82b3eb29b" containerName="marketplace-operator" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481432 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde7b0b7-ad25-49ce-8441-9f6517808da6" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481450 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6503814f-9075-4f44-8a49-9a79e5ac3c42" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481461 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="795c05a5-413f-4361-ab0e-6796cf7862f9" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.481470 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="880121ba-67c4-47f7-86a7-1c0caead4c3a" containerName="registry-server" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.482034 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.510407 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-plpwc"] Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654042 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7534805-d03b-464c-bef7-43cd08a62da4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-bound-sa-token\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654280 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7534805-d03b-464c-bef7-43cd08a62da4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654370 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-trusted-ca\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-registry-tls\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654412 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-registry-certificates\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.654434 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm88m\" (UniqueName: 
\"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-kube-api-access-wm88m\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.699345 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.754879 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-trusted-ca\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.754922 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-registry-tls\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.754944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-registry-certificates\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.754963 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm88m\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-kube-api-access-wm88m\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.754993 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7534805-d03b-464c-bef7-43cd08a62da4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.755016 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-bound-sa-token\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.755058 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7534805-d03b-464c-bef7-43cd08a62da4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.756094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c7534805-d03b-464c-bef7-43cd08a62da4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.756517 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-trusted-ca\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.757750 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c7534805-d03b-464c-bef7-43cd08a62da4-registry-certificates\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.761083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-registry-tls\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.770299 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c7534805-d03b-464c-bef7-43cd08a62da4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.776175 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-bound-sa-token\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.776531 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm88m\" (UniqueName: \"kubernetes.io/projected/c7534805-d03b-464c-bef7-43cd08a62da4-kube-api-access-wm88m\") pod \"image-registry-66df7c8f76-plpwc\" (UID: \"c7534805-d03b-464c-bef7-43cd08a62da4\") " pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.799341 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:55 crc kubenswrapper[4736]: I0316 15:19:55.998488 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-plpwc"] Mar 16 15:19:56 crc kubenswrapper[4736]: I0316 15:19:56.916308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" event={"ID":"c7534805-d03b-464c-bef7-43cd08a62da4","Type":"ContainerStarted","Data":"ca8d18895aa15d20d66ae0d6009b808880603bbccc1d163084e2d8b9c1b1652e"} Mar 16 15:19:56 crc kubenswrapper[4736]: I0316 15:19:56.916649 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" event={"ID":"c7534805-d03b-464c-bef7-43cd08a62da4","Type":"ContainerStarted","Data":"ae5382990ca3533202bc866098ef3a5fe438069a08214b6981d350da8f772397"} Mar 16 15:19:56 crc kubenswrapper[4736]: I0316 15:19:56.916893 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:19:56 crc kubenswrapper[4736]: I0316 15:19:56.946652 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" podStartSLOduration=1.946631908 podStartE2EDuration="1.946631908s" podCreationTimestamp="2026-03-16 15:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:19:56.944295527 +0000 UTC m=+398.671685814" watchObservedRunningTime="2026-03-16 15:19:56.946631908 +0000 UTC m=+398.674022195" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.129753 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561240-d98xk"] Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.131339 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.136075 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.138748 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.139128 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.144269 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561240-d98xk"] Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.310245 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8m4h\" (UniqueName: \"kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h\") pod \"auto-csr-approver-29561240-d98xk\" (UID: \"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e\") " pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.411821 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8m4h\" (UniqueName: \"kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h\") pod \"auto-csr-approver-29561240-d98xk\" (UID: \"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e\") " pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.430266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8m4h\" (UniqueName: \"kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h\") pod \"auto-csr-approver-29561240-d98xk\" (UID: \"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e\") " pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.452361 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.768364 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561240-d98xk"] Mar 16 15:20:00 crc kubenswrapper[4736]: W0316 15:20:00.774138 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae18c6b9_34d8_48e3_804c_f71bbf6fef7e.slice/crio-097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147 WatchSource:0}: Error finding container 097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147: Status 404 returned error can't find the container with id 097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147 Mar 16 15:20:00 crc kubenswrapper[4736]: I0316 15:20:00.937461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561240-d98xk" event={"ID":"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e","Type":"ContainerStarted","Data":"097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147"} Mar 16 15:20:02 crc kubenswrapper[4736]: I0316 15:20:02.950741 4736 generic.go:334] "Generic (PLEG): container finished" podID="ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" containerID="8aedc307974f86293ae2757cc01a47771f84ff5c449c7a6c07565bd14805b8fc" exitCode=0 Mar 16 15:20:02 crc kubenswrapper[4736]: I0316 15:20:02.951191 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561240-d98xk" event={"ID":"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e","Type":"ContainerDied","Data":"8aedc307974f86293ae2757cc01a47771f84ff5c449c7a6c07565bd14805b8fc"} Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.163365 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.268631 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8m4h\" (UniqueName: \"kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h\") pod \"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e\" (UID: \"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e\") " Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.282153 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h" (OuterVolumeSpecName: "kube-api-access-k8m4h") pod "ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" (UID: "ae18c6b9-34d8-48e3-804c-f71bbf6fef7e"). InnerVolumeSpecName "kube-api-access-k8m4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.370668 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8m4h\" (UniqueName: \"kubernetes.io/projected/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e-kube-api-access-k8m4h\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.962308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561240-d98xk" event={"ID":"ae18c6b9-34d8-48e3-804c-f71bbf6fef7e","Type":"ContainerDied","Data":"097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147"} Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.962371 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097adaf876c7e6c24d89f227f8365d54726f91b07680f6f6b775142a7a346147" Mar 16 15:20:04 crc kubenswrapper[4736]: I0316 15:20:04.962411 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561240-d98xk" Mar 16 15:20:08 crc kubenswrapper[4736]: I0316 15:20:08.508214 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:20:08 crc kubenswrapper[4736]: I0316 15:20:08.508715 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:20:15 crc kubenswrapper[4736]: I0316 15:20:15.808683 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-plpwc" Mar 16 15:20:15 crc kubenswrapper[4736]: I0316 15:20:15.884811 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:20:21 crc kubenswrapper[4736]: I0316 15:20:21.990388 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 15:20:21 crc kubenswrapper[4736]: E0316 15:20:21.991562 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" containerName="oc" Mar 16 15:20:21 crc kubenswrapper[4736]: I0316 15:20:21.991598 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" containerName="oc" Mar 16 15:20:21 crc kubenswrapper[4736]: I0316 15:20:21.991888 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" containerName="oc" Mar 16 15:20:21 crc kubenswrapper[4736]: I0316 15:20:21.996967 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:21 crc kubenswrapper[4736]: I0316 15:20:21.999360 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.014355 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.031071 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.031169 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdm97\" (UniqueName: \"kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.031239 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.132498 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.132813 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdm97\" (UniqueName: \"kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.132846 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.133374 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.133401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities\") pod \"community-operators-m9cjw\" (UID: 
\"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.164329 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdm97\" (UniqueName: \"kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97\") pod \"community-operators-m9cjw\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.186330 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.187646 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.191388 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.208579 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.234077 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.234207 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vsmw\" (UniqueName: \"kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.234266 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.314734 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.335833 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.336306 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vsmw\" (UniqueName: \"kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.336480 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.337442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.337503 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.356961 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vsmw\" (UniqueName: \"kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw\") pod \"redhat-operators-48gwj\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.505540 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.563063 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 15:20:22 crc kubenswrapper[4736]: I0316 15:20:22.713174 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 15:20:22 crc kubenswrapper[4736]: W0316 15:20:22.719426 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod785f2d20_3733_4b65_827e_45a047ecc4c6.slice/crio-2faa7c949699922ca0990abae30a5eea5733b2012648cc84a27a055cda75c596 WatchSource:0}: Error finding container 2faa7c949699922ca0990abae30a5eea5733b2012648cc84a27a055cda75c596: Status 404 returned error can't find the container with id 2faa7c949699922ca0990abae30a5eea5733b2012648cc84a27a055cda75c596 Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.088255 4736 generic.go:334] "Generic (PLEG): container finished" podID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerID="57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3" exitCode=0 Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.088430 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerDied","Data":"57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3"} Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.088803 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerStarted","Data":"541d091b5e65e565a38943dd1a10848c30fc34929d1157fb13c8f09b037b9b80"} Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.095271 4736 generic.go:334] "Generic (PLEG): container finished" podID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerID="0ee69cb373612e21e2294d9b24629cbdd631c6bb9b84ca384c3ddde50bab6932" exitCode=0 Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.095324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerDied","Data":"0ee69cb373612e21e2294d9b24629cbdd631c6bb9b84ca384c3ddde50bab6932"} Mar 16 15:20:23 crc kubenswrapper[4736]: I0316 15:20:23.095359 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerStarted","Data":"2faa7c949699922ca0990abae30a5eea5733b2012648cc84a27a055cda75c596"} Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.103353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerStarted","Data":"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1"} Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.106117 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerStarted","Data":"301cfac7673cc6d2892771cbaf4fc28144e592f25278a970a3e8a7dce577b3d9"} Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.379858 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-j9zbt"] Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.380885 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.386553 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6s99\" (UniqueName: \"kubernetes.io/projected/64d704b6-37a3-4ea1-bbe6-af675569eb7a-kube-api-access-m6s99\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.386758 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-utilities\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.386850 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-catalog-content\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.388479 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.420882 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9zbt"] Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.488747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-utilities\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.488867 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-catalog-content\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.488908 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6s99\" (UniqueName: \"kubernetes.io/projected/64d704b6-37a3-4ea1-bbe6-af675569eb7a-kube-api-access-m6s99\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.489421 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-utilities\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.489497 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d704b6-37a3-4ea1-bbe6-af675569eb7a-catalog-content\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.515814 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6s99\" (UniqueName: \"kubernetes.io/projected/64d704b6-37a3-4ea1-bbe6-af675569eb7a-kube-api-access-m6s99\") pod \"certified-operators-j9zbt\" (UID: \"64d704b6-37a3-4ea1-bbe6-af675569eb7a\") " pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.581603 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5kgqs"] Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.583061 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.586164 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.589454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-catalog-content\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.589512 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-utilities\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.589568 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prj4z\" (UniqueName: \"kubernetes.io/projected/9fba4db8-97e3-4e96-b22b-0aca71b4217f-kube-api-access-prj4z\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.633949 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kgqs"] Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.690413 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-utilities\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.690489 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prj4z\" (UniqueName: \"kubernetes.io/projected/9fba4db8-97e3-4e96-b22b-0aca71b4217f-kube-api-access-prj4z\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.690523 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-catalog-content\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.691063 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-utilities\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.691081 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fba4db8-97e3-4e96-b22b-0aca71b4217f-catalog-content\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.714799 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:24 crc kubenswrapper[4736]: I0316 15:20:24.728275 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prj4z\" (UniqueName: \"kubernetes.io/projected/9fba4db8-97e3-4e96-b22b-0aca71b4217f-kube-api-access-prj4z\") pod \"redhat-marketplace-5kgqs\" (UID: \"9fba4db8-97e3-4e96-b22b-0aca71b4217f\") " pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:24.899373 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.115039 4736 generic.go:334] "Generic (PLEG): container finished" podID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerID="301cfac7673cc6d2892771cbaf4fc28144e592f25278a970a3e8a7dce577b3d9" exitCode=0 Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.115244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerDied","Data":"301cfac7673cc6d2892771cbaf4fc28144e592f25278a970a3e8a7dce577b3d9"} Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.119223 4736 generic.go:334] "Generic (PLEG): container finished" podID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerID="79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1" exitCode=0 Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.119300 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerDied","Data":"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1"} Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.587528 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9zbt"] Mar 16 15:20:25 crc kubenswrapper[4736]: I0316 15:20:25.648384 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5kgqs"] Mar 16 15:20:25 crc kubenswrapper[4736]: W0316 15:20:25.652981 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fba4db8_97e3_4e96_b22b_0aca71b4217f.slice/crio-f96181348065ef649af4e896f3e7d46212a8f5a3634c14673dbeb6007c25fc93 WatchSource:0}: Error finding container f96181348065ef649af4e896f3e7d46212a8f5a3634c14673dbeb6007c25fc93: Status 404 returned error can't find the container with id f96181348065ef649af4e896f3e7d46212a8f5a3634c14673dbeb6007c25fc93 Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.126500 4736 generic.go:334] "Generic (PLEG): container finished" podID="9fba4db8-97e3-4e96-b22b-0aca71b4217f" containerID="f0053e4190e847fe18b9fab26499067e61be124915559cec8de053fef9761edb" exitCode=0 Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.126594 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kgqs" event={"ID":"9fba4db8-97e3-4e96-b22b-0aca71b4217f","Type":"ContainerDied","Data":"f0053e4190e847fe18b9fab26499067e61be124915559cec8de053fef9761edb"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.126631 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kgqs" event={"ID":"9fba4db8-97e3-4e96-b22b-0aca71b4217f","Type":"ContainerStarted","Data":"f96181348065ef649af4e896f3e7d46212a8f5a3634c14673dbeb6007c25fc93"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.130176 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerStarted","Data":"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.134165 4736 generic.go:334] "Generic (PLEG): container finished" podID="64d704b6-37a3-4ea1-bbe6-af675569eb7a" containerID="c66d67999fdfe9e3c8f09fabbd15f0a29ba7c9f8e67fdafa1cd4749c6e66ca2c" exitCode=0 Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.134241 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9zbt" event={"ID":"64d704b6-37a3-4ea1-bbe6-af675569eb7a","Type":"ContainerDied","Data":"c66d67999fdfe9e3c8f09fabbd15f0a29ba7c9f8e67fdafa1cd4749c6e66ca2c"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.134266 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9zbt" event={"ID":"64d704b6-37a3-4ea1-bbe6-af675569eb7a","Type":"ContainerStarted","Data":"4f839f680d1d4c9a4bc834114f3e612f8c9cafca615d73dbec87d0d9417d2ffc"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.141596 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerStarted","Data":"4829d889cfbe624e06456c45e8fbb5283c57d8a49ecd6ffc4da94ce06b53aeae"} Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.172558 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48gwj" podStartSLOduration=1.6888342010000001 podStartE2EDuration="4.172537624s" podCreationTimestamp="2026-03-16 15:20:22 +0000 UTC" firstStartedPulling="2026-03-16 15:20:23.100745586 +0000 UTC m=+424.828135873" lastFinishedPulling="2026-03-16 15:20:25.584449009 +0000 UTC m=+427.311839296" observedRunningTime="2026-03-16 15:20:26.170562873 +0000 UTC m=+427.897953160" watchObservedRunningTime="2026-03-16 15:20:26.172537624 +0000 UTC m=+427.899927901" Mar 16 15:20:26 crc kubenswrapper[4736]: I0316 15:20:26.217714 
4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m9cjw" podStartSLOduration=2.7980188999999998 podStartE2EDuration="5.217692666s" podCreationTimestamp="2026-03-16 15:20:21 +0000 UTC" firstStartedPulling="2026-03-16 15:20:23.093165627 +0000 UTC m=+424.820555914" lastFinishedPulling="2026-03-16 15:20:25.512839393 +0000 UTC m=+427.240229680" observedRunningTime="2026-03-16 15:20:26.196877304 +0000 UTC m=+427.924267591" watchObservedRunningTime="2026-03-16 15:20:26.217692666 +0000 UTC m=+427.945082953" Mar 16 15:20:27 crc kubenswrapper[4736]: I0316 15:20:27.154051 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kgqs" event={"ID":"9fba4db8-97e3-4e96-b22b-0aca71b4217f","Type":"ContainerStarted","Data":"f66964986de2f04e17358e804bcc3badf65219a901e56a97f19275ed9de95fda"} Mar 16 15:20:28 crc kubenswrapper[4736]: I0316 15:20:28.162990 4736 generic.go:334] "Generic (PLEG): container finished" podID="64d704b6-37a3-4ea1-bbe6-af675569eb7a" containerID="7ab5f9ef288f9b435b42840b1085fc81caeb224d2882af3016d5510cf6635c93" exitCode=0 Mar 16 15:20:28 crc kubenswrapper[4736]: I0316 15:20:28.163094 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9zbt" event={"ID":"64d704b6-37a3-4ea1-bbe6-af675569eb7a","Type":"ContainerDied","Data":"7ab5f9ef288f9b435b42840b1085fc81caeb224d2882af3016d5510cf6635c93"} Mar 16 15:20:28 crc kubenswrapper[4736]: I0316 15:20:28.167567 4736 generic.go:334] "Generic (PLEG): container finished" podID="9fba4db8-97e3-4e96-b22b-0aca71b4217f" containerID="f66964986de2f04e17358e804bcc3badf65219a901e56a97f19275ed9de95fda" exitCode=0 Mar 16 15:20:28 crc kubenswrapper[4736]: I0316 15:20:28.167633 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kgqs" event={"ID":"9fba4db8-97e3-4e96-b22b-0aca71b4217f","Type":"ContainerDied","Data":"f66964986de2f04e17358e804bcc3badf65219a901e56a97f19275ed9de95fda"} Mar 16 15:20:29 crc kubenswrapper[4736]: I0316 15:20:29.177063 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9zbt" event={"ID":"64d704b6-37a3-4ea1-bbe6-af675569eb7a","Type":"ContainerStarted","Data":"6292466489c371845f1caf57ad3f6f2926e071fc50168962358ab62f2607e215"} Mar 16 15:20:29 crc kubenswrapper[4736]: I0316 15:20:29.180030 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5kgqs" event={"ID":"9fba4db8-97e3-4e96-b22b-0aca71b4217f","Type":"ContainerStarted","Data":"a21e0c1c9c49eeea36bdb74986055caf02dff5845e3c0687cf513ce205a38abf"} Mar 16 15:20:29 crc kubenswrapper[4736]: I0316 15:20:29.248384 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j9zbt" podStartSLOduration=2.466770847 podStartE2EDuration="5.248362143s" podCreationTimestamp="2026-03-16 15:20:24 +0000 UTC" firstStartedPulling="2026-03-16 15:20:26.135632402 +0000 UTC m=+427.863022679" lastFinishedPulling="2026-03-16 15:20:28.917223688 +0000 UTC m=+430.644613975" observedRunningTime="2026-03-16 15:20:29.213952847 +0000 UTC m=+430.941343154" watchObservedRunningTime="2026-03-16 15:20:29.248362143 +0000 UTC m=+430.975752430" Mar 16 15:20:29 crc kubenswrapper[4736]: I0316 15:20:29.248644 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5kgqs" podStartSLOduration=2.792533049 
podStartE2EDuration="5.248640631s" podCreationTimestamp="2026-03-16 15:20:24 +0000 UTC" firstStartedPulling="2026-03-16 15:20:26.129248258 +0000 UTC m=+427.856638555" lastFinishedPulling="2026-03-16 15:20:28.58535585 +0000 UTC m=+430.312746137" observedRunningTime="2026-03-16 15:20:29.24497832 +0000 UTC m=+430.972368607" watchObservedRunningTime="2026-03-16 15:20:29.248640631 +0000 UTC m=+430.976030928" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.315826 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.316277 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.375691 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.506201 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.506254 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:32 crc kubenswrapper[4736]: I0316 15:20:32.565943 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:33 crc kubenswrapper[4736]: I0316 15:20:33.248628 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 15:20:33 crc kubenswrapper[4736]: I0316 15:20:33.281636 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.716061 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.716758 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.786522 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.900800 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.902083 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:34 crc kubenswrapper[4736]: I0316 15:20:34.949132 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:35 crc kubenswrapper[4736]: I0316 15:20:35.255432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5kgqs" Mar 16 15:20:35 crc kubenswrapper[4736]: I0316 15:20:35.280335 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j9zbt" Mar 16 15:20:38 crc kubenswrapper[4736]: I0316 15:20:38.508928 4736 patch_prober.go:28] interesting 
pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:20:38 crc kubenswrapper[4736]: I0316 15:20:38.509988 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:20:40 crc kubenswrapper[4736]: I0316 15:20:40.928461 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" podUID="8ccfe744-3d0c-404c-aed7-94c575a05b34" containerName="registry" containerID="cri-o://ece24d02401d4dfa92dd4e1ea857c5dd736234308f790c057bfca365fb0ff31e" gracePeriod=30 Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.268773 4736 generic.go:334] "Generic (PLEG): container finished" podID="8ccfe744-3d0c-404c-aed7-94c575a05b34" containerID="ece24d02401d4dfa92dd4e1ea857c5dd736234308f790c057bfca365fb0ff31e" exitCode=0 Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.268922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" event={"ID":"8ccfe744-3d0c-404c-aed7-94c575a05b34","Type":"ContainerDied","Data":"ece24d02401d4dfa92dd4e1ea857c5dd736234308f790c057bfca365fb0ff31e"} Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.314147 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409058 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409125 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409174 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnj7z\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409197 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409216 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls\") pod 
\"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.409257 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.410534 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.425499 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.425608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.425657 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca\") pod \"8ccfe744-3d0c-404c-aed7-94c575a05b34\" (UID: \"8ccfe744-3d0c-404c-aed7-94c575a05b34\") " Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.425850 4736 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8ccfe744-3d0c-404c-aed7-94c575a05b34-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.425868 4736 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.427540 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.431823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.432206 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.431162 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.433214 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z" (OuterVolumeSpecName: "kube-api-access-rnj7z") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "kube-api-access-rnj7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.437725 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8ccfe744-3d0c-404c-aed7-94c575a05b34" (UID: "8ccfe744-3d0c-404c-aed7-94c575a05b34"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.526982 4736 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.527047 4736 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8ccfe744-3d0c-404c-aed7-94c575a05b34-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.527061 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnj7z\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-kube-api-access-rnj7z\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.527086 4736 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8ccfe744-3d0c-404c-aed7-94c575a05b34-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:41 crc kubenswrapper[4736]: I0316 15:20:41.527095 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ccfe744-3d0c-404c-aed7-94c575a05b34-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.276959 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" event={"ID":"8ccfe744-3d0c-404c-aed7-94c575a05b34","Type":"ContainerDied","Data":"1084d7a9051df595dbb0c8bfcff027e0b1d2ea778112918d9e425e42f6d5e1dd"} Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.277513 4736 scope.go:117] "RemoveContainer" containerID="ece24d02401d4dfa92dd4e1ea857c5dd736234308f790c057bfca365fb0ff31e" Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.277042 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-v8kpj" Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.316956 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.329662 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-v8kpj"] Mar 16 15:20:42 crc kubenswrapper[4736]: I0316 15:20:42.985812 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccfe744-3d0c-404c-aed7-94c575a05b34" path="/var/lib/kubelet/pods/8ccfe744-3d0c-404c-aed7-94c575a05b34/volumes" Mar 16 15:21:08 crc kubenswrapper[4736]: I0316 15:21:08.508069 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:21:08 crc kubenswrapper[4736]: I0316 15:21:08.509232 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:21:08 crc kubenswrapper[4736]: I0316 15:21:08.509323 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:21:08 crc kubenswrapper[4736]: I0316 15:21:08.510757 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:21:08 crc kubenswrapper[4736]: I0316 15:21:08.510906 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7" gracePeriod=600 Mar 16 15:21:09 crc kubenswrapper[4736]: I0316 15:21:09.454252 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7" exitCode=0 Mar 16 15:21:09 crc kubenswrapper[4736]: I0316 15:21:09.454598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7"} Mar 16 15:21:09 crc kubenswrapper[4736]: I0316 15:21:09.455135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d"} Mar 16 15:21:09 crc kubenswrapper[4736]: I0316 15:21:09.455324 4736 scope.go:117] "RemoveContainer" 
containerID="7e91831abbb600471d0541d4e307614920a595b5eab27273498fcfbf4e0a08f8" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.154683 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561242-k42xz"] Mar 16 15:22:00 crc kubenswrapper[4736]: E0316 15:22:00.157230 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccfe744-3d0c-404c-aed7-94c575a05b34" containerName="registry" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.157260 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccfe744-3d0c-404c-aed7-94c575a05b34" containerName="registry" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.157489 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccfe744-3d0c-404c-aed7-94c575a05b34" containerName="registry" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.158242 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.162524 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.162777 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.162911 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.174391 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561242-k42xz"] Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.346640 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7tp2\" (UniqueName: \"kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2\") pod \"auto-csr-approver-29561242-k42xz\" (UID: \"8d1657e3-98d0-4f79-9e5b-27a428d12c85\") " pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.447831 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7tp2\" (UniqueName: \"kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2\") pod \"auto-csr-approver-29561242-k42xz\" (UID: \"8d1657e3-98d0-4f79-9e5b-27a428d12c85\") " pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.482632 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7tp2\" (UniqueName: \"kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2\") pod \"auto-csr-approver-29561242-k42xz\" (UID: \"8d1657e3-98d0-4f79-9e5b-27a428d12c85\") " pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.492695 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.743863 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561242-k42xz"] Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.755981 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:22:00 crc kubenswrapper[4736]: I0316 15:22:00.910833 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561242-k42xz" event={"ID":"8d1657e3-98d0-4f79-9e5b-27a428d12c85","Type":"ContainerStarted","Data":"c12dde61ab0ea5fe92642d6af511cbbd3c1845a03eac59dd1fe5e71a6024e726"} Mar 16 15:22:02 crc kubenswrapper[4736]: I0316 15:22:02.928517 4736 generic.go:334] "Generic (PLEG): container finished" podID="8d1657e3-98d0-4f79-9e5b-27a428d12c85" containerID="02ffa8fbd7725bd1f500fbbad2ca7ed5e168b976674a32288d293bab1b1bcc25" exitCode=0 Mar 16 15:22:02 crc kubenswrapper[4736]: I0316 15:22:02.928713 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561242-k42xz" event={"ID":"8d1657e3-98d0-4f79-9e5b-27a428d12c85","Type":"ContainerDied","Data":"02ffa8fbd7725bd1f500fbbad2ca7ed5e168b976674a32288d293bab1b1bcc25"} Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.203524 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.220433 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7tp2\" (UniqueName: \"kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2\") pod \"8d1657e3-98d0-4f79-9e5b-27a428d12c85\" (UID: \"8d1657e3-98d0-4f79-9e5b-27a428d12c85\") " Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.233082 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2" (OuterVolumeSpecName: "kube-api-access-m7tp2") pod "8d1657e3-98d0-4f79-9e5b-27a428d12c85" (UID: "8d1657e3-98d0-4f79-9e5b-27a428d12c85"). InnerVolumeSpecName "kube-api-access-m7tp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.321837 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7tp2\" (UniqueName: \"kubernetes.io/projected/8d1657e3-98d0-4f79-9e5b-27a428d12c85-kube-api-access-m7tp2\") on node \"crc\" DevicePath \"\"" Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.945457 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561242-k42xz" event={"ID":"8d1657e3-98d0-4f79-9e5b-27a428d12c85","Type":"ContainerDied","Data":"c12dde61ab0ea5fe92642d6af511cbbd3c1845a03eac59dd1fe5e71a6024e726"} Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.945521 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12dde61ab0ea5fe92642d6af511cbbd3c1845a03eac59dd1fe5e71a6024e726" Mar 16 15:22:04 crc kubenswrapper[4736]: I0316 15:22:04.945566 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561242-k42xz" Mar 16 15:22:05 crc kubenswrapper[4736]: I0316 15:22:05.293125 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561236-cxwv6"] Mar 16 15:22:05 crc kubenswrapper[4736]: I0316 15:22:05.300906 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561236-cxwv6"] Mar 16 15:22:06 crc kubenswrapper[4736]: I0316 15:22:06.988578 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae9de77b-767a-4a87-b4bd-648728dc9826" path="/var/lib/kubelet/pods/ae9de77b-767a-4a87-b4bd-648728dc9826/volumes" Mar 16 15:23:08 crc kubenswrapper[4736]: I0316 15:23:08.508088 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:23:08 crc kubenswrapper[4736]: I0316 15:23:08.509100 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:23:21 crc kubenswrapper[4736]: I0316 15:23:21.728469 4736 scope.go:117] "RemoveContainer" containerID="a3033f5ff86fc64106eaedf12e9b3591ba74d7569ab978294dcff96605aa6b70" Mar 16 15:23:38 crc kubenswrapper[4736]: I0316 15:23:38.508045 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:23:38 crc kubenswrapper[4736]: I0316 15:23:38.510387 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.141785 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561244-zqjt8"] Mar 16 15:24:00 crc kubenswrapper[4736]: E0316 15:24:00.142872 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1657e3-98d0-4f79-9e5b-27a428d12c85" containerName="oc" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.142895 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1657e3-98d0-4f79-9e5b-27a428d12c85" containerName="oc" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.143082 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1657e3-98d0-4f79-9e5b-27a428d12c85" containerName="oc" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.143770 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.146289 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.146399 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561244-zqjt8"] Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.147366 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.147722 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.336337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbpz\" (UniqueName: \"kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz\") pod \"auto-csr-approver-29561244-zqjt8\" (UID: \"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad\") " pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.437611 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbpz\" (UniqueName: \"kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz\") pod \"auto-csr-approver-29561244-zqjt8\" (UID: \"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad\") " pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.461376 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbpz\" (UniqueName: \"kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz\") pod \"auto-csr-approver-29561244-zqjt8\" (UID: \"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad\") " pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.461840 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.645262 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561244-zqjt8"] Mar 16 15:24:00 crc kubenswrapper[4736]: I0316 15:24:00.833745 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" event={"ID":"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad","Type":"ContainerStarted","Data":"722ddc7796dc53b8058f6b077fcd00d7e581446668edfe453bd540f438a5cabe"} Mar 16 15:24:02 crc kubenswrapper[4736]: I0316 15:24:02.846025 4736 generic.go:334] "Generic (PLEG): container finished" podID="986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" containerID="647bdc45d9a68be3ffa555e0db22de624942eb07560e00e78f8a83ba3b5c139e" exitCode=0 Mar 16 15:24:02 crc kubenswrapper[4736]: I0316 15:24:02.846157 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" event={"ID":"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad","Type":"ContainerDied","Data":"647bdc45d9a68be3ffa555e0db22de624942eb07560e00e78f8a83ba3b5c139e"} Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.039982 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.187792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbpz\" (UniqueName: \"kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz\") pod \"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad\" (UID: \"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad\") " Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.195655 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz" (OuterVolumeSpecName: "kube-api-access-dkbpz") pod "986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" (UID: "986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad"). InnerVolumeSpecName "kube-api-access-dkbpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.289218 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbpz\" (UniqueName: \"kubernetes.io/projected/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad-kube-api-access-dkbpz\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.858448 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" event={"ID":"986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad","Type":"ContainerDied","Data":"722ddc7796dc53b8058f6b077fcd00d7e581446668edfe453bd540f438a5cabe"} Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.858701 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722ddc7796dc53b8058f6b077fcd00d7e581446668edfe453bd540f438a5cabe" Mar 16 15:24:04 crc kubenswrapper[4736]: I0316 15:24:04.858497 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561244-zqjt8" Mar 16 15:24:05 crc kubenswrapper[4736]: I0316 15:24:05.101700 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561238-dp7c4"] Mar 16 15:24:05 crc kubenswrapper[4736]: I0316 15:24:05.105779 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561238-dp7c4"] Mar 16 15:24:06 crc kubenswrapper[4736]: I0316 15:24:06.990797 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="608f1e44-20c1-4c3e-80d3-80ce123c1b2b" path="/var/lib/kubelet/pods/608f1e44-20c1-4c3e-80d3-80ce123c1b2b/volumes" Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.508647 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.508720 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.508770 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.509442 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.509500 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d" gracePeriod=600 Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.946673 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d" exitCode=0 Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.947128 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d"} Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.947169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8"} Mar 16 15:24:08 crc kubenswrapper[4736]: I0316 15:24:08.947192 4736 scope.go:117] "RemoveContainer" 
containerID="1c8d8f26777ffc8692ceadd75541f1f6f498f2d7da2d9e6a13c8dd2dcd18a2d7" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.093642 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s"] Mar 16 15:24:10 crc kubenswrapper[4736]: E0316 15:24:10.095381 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" containerName="oc" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.095448 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" containerName="oc" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.095611 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" containerName="oc" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.096126 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.104776 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-775xw"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.105922 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-775xw" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.109518 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.110144 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5dp47" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.110448 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-98bzm" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.110449 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.123387 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qrt7l"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.124538 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.126650 4736 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z7dp2" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.135201 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.141413 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-775xw"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.157589 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qrt7l"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.283308 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w84tc\" (UniqueName: \"kubernetes.io/projected/d2eb8b3d-8b48-4110-bab7-66fc20948ee5-kube-api-access-w84tc\") pod \"cert-manager-webhook-687f57d79b-qrt7l\" (UID: \"d2eb8b3d-8b48-4110-bab7-66fc20948ee5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.283392 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcl6\" (UniqueName: \"kubernetes.io/projected/8c13d851-26c9-4a4f-8ffc-a94a10784cf2-kube-api-access-6bcl6\") pod \"cert-manager-cainjector-cf98fcc89-nvl2s\" (UID: \"8c13d851-26c9-4a4f-8ffc-a94a10784cf2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.283620 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlfdp\" (UniqueName: \"kubernetes.io/projected/855eb880-6d37-4d3c-a863-d4cb7520dc47-kube-api-access-zlfdp\") pod \"cert-manager-858654f9db-775xw\" (UID: \"855eb880-6d37-4d3c-a863-d4cb7520dc47\") " pod="cert-manager/cert-manager-858654f9db-775xw" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.385549 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w84tc\" (UniqueName: \"kubernetes.io/projected/d2eb8b3d-8b48-4110-bab7-66fc20948ee5-kube-api-access-w84tc\") pod \"cert-manager-webhook-687f57d79b-qrt7l\" (UID: \"d2eb8b3d-8b48-4110-bab7-66fc20948ee5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.385609 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcl6\" (UniqueName: \"kubernetes.io/projected/8c13d851-26c9-4a4f-8ffc-a94a10784cf2-kube-api-access-6bcl6\") pod \"cert-manager-cainjector-cf98fcc89-nvl2s\" (UID: \"8c13d851-26c9-4a4f-8ffc-a94a10784cf2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.385651 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlfdp\" (UniqueName: \"kubernetes.io/projected/855eb880-6d37-4d3c-a863-d4cb7520dc47-kube-api-access-zlfdp\") pod \"cert-manager-858654f9db-775xw\" (UID: \"855eb880-6d37-4d3c-a863-d4cb7520dc47\") " pod="cert-manager/cert-manager-858654f9db-775xw" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.407315 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlfdp\" (UniqueName: 
\"kubernetes.io/projected/855eb880-6d37-4d3c-a863-d4cb7520dc47-kube-api-access-zlfdp\") pod \"cert-manager-858654f9db-775xw\" (UID: \"855eb880-6d37-4d3c-a863-d4cb7520dc47\") " pod="cert-manager/cert-manager-858654f9db-775xw" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.410183 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w84tc\" (UniqueName: \"kubernetes.io/projected/d2eb8b3d-8b48-4110-bab7-66fc20948ee5-kube-api-access-w84tc\") pod \"cert-manager-webhook-687f57d79b-qrt7l\" (UID: \"d2eb8b3d-8b48-4110-bab7-66fc20948ee5\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.412458 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcl6\" (UniqueName: \"kubernetes.io/projected/8c13d851-26c9-4a4f-8ffc-a94a10784cf2-kube-api-access-6bcl6\") pod \"cert-manager-cainjector-cf98fcc89-nvl2s\" (UID: \"8c13d851-26c9-4a4f-8ffc-a94a10784cf2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.428706 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.438574 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-775xw" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.445604 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.736456 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qrt7l"] Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.808885 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-775xw"] Mar 16 15:24:10 crc kubenswrapper[4736]: W0316 15:24:10.817748 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855eb880_6d37_4d3c_a863_d4cb7520dc47.slice/crio-ce94cc9f300c173dc5d732b4137ab8ddfb3c4968bfd5982feed961d99f2d87a8 WatchSource:0}: Error finding container ce94cc9f300c173dc5d732b4137ab8ddfb3c4968bfd5982feed961d99f2d87a8: Status 404 returned error can't find the container with id ce94cc9f300c173dc5d732b4137ab8ddfb3c4968bfd5982feed961d99f2d87a8 Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.963598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" event={"ID":"d2eb8b3d-8b48-4110-bab7-66fc20948ee5","Type":"ContainerStarted","Data":"85a249d20e8f54a40d7a2bfc8098d4dd2d4a9c53489b419f4c1278413d9a507e"} Mar 16 15:24:10 crc kubenswrapper[4736]: I0316 15:24:10.964719 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-775xw" event={"ID":"855eb880-6d37-4d3c-a863-d4cb7520dc47","Type":"ContainerStarted","Data":"ce94cc9f300c173dc5d732b4137ab8ddfb3c4968bfd5982feed961d99f2d87a8"} Mar 16 15:24:11 crc kubenswrapper[4736]: I0316 15:24:11.016035 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s"] Mar 16 15:24:11 crc kubenswrapper[4736]: W0316 15:24:11.022383 4736 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c13d851_26c9_4a4f_8ffc_a94a10784cf2.slice/crio-366853e3b9e831d54174da7392478b32866f174f9d1bd3b565bbe81bbea4ea00 WatchSource:0}: Error finding container 366853e3b9e831d54174da7392478b32866f174f9d1bd3b565bbe81bbea4ea00: Status 404 returned error can't find the container with id 366853e3b9e831d54174da7392478b32866f174f9d1bd3b565bbe81bbea4ea00 Mar 16 15:24:11 crc kubenswrapper[4736]: I0316 15:24:11.973299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" event={"ID":"8c13d851-26c9-4a4f-8ffc-a94a10784cf2","Type":"ContainerStarted","Data":"366853e3b9e831d54174da7392478b32866f174f9d1bd3b565bbe81bbea4ea00"} Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.011023 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" event={"ID":"d2eb8b3d-8b48-4110-bab7-66fc20948ee5","Type":"ContainerStarted","Data":"6d4e1b1beca8dae16b42102ac4f7dfae7fdf360c7dce9588237aeb624eb8362f"} Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.012028 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.014032 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-775xw" event={"ID":"855eb880-6d37-4d3c-a863-d4cb7520dc47","Type":"ContainerStarted","Data":"ca37e6988f6439b73ebe7a1f4c851209821e6233504fdfb555d7f2cf9f1b1032"} Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.022035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" event={"ID":"8c13d851-26c9-4a4f-8ffc-a94a10784cf2","Type":"ContainerStarted","Data":"1f4e19ba3b5d0805c3c620709a9196254180404a0d5176c0d34df021bdf08f2d"} Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.042141 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podStartSLOduration=1.008243479 podStartE2EDuration="5.042089863s" podCreationTimestamp="2026-03-16 15:24:10 +0000 UTC" firstStartedPulling="2026-03-16 15:24:10.748144246 +0000 UTC m=+652.475534533" lastFinishedPulling="2026-03-16 15:24:14.78199062 +0000 UTC m=+656.509380917" observedRunningTime="2026-03-16 15:24:15.033795222 +0000 UTC m=+656.761185509" watchObservedRunningTime="2026-03-16 15:24:15.042089863 +0000 UTC m=+656.769480160" Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.053748 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nvl2s" podStartSLOduration=1.368343527 podStartE2EDuration="5.053725247s" podCreationTimestamp="2026-03-16 15:24:10 +0000 UTC" firstStartedPulling="2026-03-16 15:24:11.024663106 +0000 UTC m=+652.752053403" lastFinishedPulling="2026-03-16 15:24:14.710044816 +0000 UTC m=+656.437435123" observedRunningTime="2026-03-16 15:24:15.048422699 +0000 UTC m=+656.775812996" watchObservedRunningTime="2026-03-16 15:24:15.053725247 +0000 UTC m=+656.781115544" Mar 16 15:24:15 crc kubenswrapper[4736]: I0316 15:24:15.075470 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-775xw" podStartSLOduration=1.181944105 podStartE2EDuration="5.075448072s" podCreationTimestamp="2026-03-16 15:24:10 +0000 UTC" firstStartedPulling="2026-03-16 15:24:10.81905923 +0000 UTC 
m=+652.546449517" lastFinishedPulling="2026-03-16 15:24:14.712563187 +0000 UTC m=+656.439953484" observedRunningTime="2026-03-16 15:24:15.069708381 +0000 UTC m=+656.797098668" watchObservedRunningTime="2026-03-16 15:24:15.075448072 +0000 UTC m=+656.802838359" Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.889160 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w7kdw"] Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.890861 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-controller" containerID="cri-o://db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891076 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="northd" containerID="cri-o://b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891194 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891262 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-node" containerID="cri-o://48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891305 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-acl-logging" containerID="cri-o://1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891464 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="sbdb" containerID="cri-o://3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.891535 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="nbdb" containerID="cri-o://400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" gracePeriod=30 Mar 16 15:24:19 crc kubenswrapper[4736]: I0316 15:24:19.949095 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" containerID="cri-o://b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" gracePeriod=30 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.069005 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.073779 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-acl-logging/0.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.075740 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-controller/0.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076302 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" exitCode=0 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076335 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" exitCode=0 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076348 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" exitCode=143 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076377 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" exitCode=143 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076426 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.076555 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.079634 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/1.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.080242 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/0.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.080290 4736 generic.go:334] "Generic (PLEG): container finished" podID="eabe1535-f51c-4a72-b299-aab5ca4ab624" containerID="a6bfb6a3231ab025727278c14ea32655f87819defe1123015b98b189953c091e" 
exitCode=2 Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.080328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerDied","Data":"a6bfb6a3231ab025727278c14ea32655f87819defe1123015b98b189953c091e"} Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.080367 4736 scope.go:117] "RemoveContainer" containerID="5910421f4ba213bd9aa01a197a88d934b2be6e645eab623e507a9db641b311ac" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.080999 4736 scope.go:117] "RemoveContainer" containerID="a6bfb6a3231ab025727278c14ea32655f87819defe1123015b98b189953c091e" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.081190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-zgcj2_openshift-multus(eabe1535-f51c-4a72-b299-aab5ca4ab624)\"" pod="openshift-multus/multus-zgcj2" podUID="eabe1535-f51c-4a72-b299-aab5ca4ab624" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.257633 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.259420 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-acl-logging/0.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.259854 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-controller/0.log" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.260299 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327696 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327758 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327777 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327809 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327854 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327874 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327897 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327921 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327938 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327958 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.327975 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328022 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328047 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328064 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328079 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328097 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p48n\" (UniqueName: \"kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328137 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.328174 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd\") pod \"83041fd9-2e75-4569-ab47-ac7590a189a6\" (UID: \"83041fd9-2e75-4569-ab47-ac7590a189a6\") " Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329021 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329167 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329415 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket" (OuterVolumeSpecName: "log-socket") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329452 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log" (OuterVolumeSpecName: "node-log") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329482 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329559 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329594 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash" (OuterVolumeSpecName: "host-slash") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329580 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329711 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329792 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.329846 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.330184 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.330466 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.330920 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.333734 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ngv26"] Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334297 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-ovn-metrics" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334394 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-ovn-metrics" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334453 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-acl-logging" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334500 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-acl-logging" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334552 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334600 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334652 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334705 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334754 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-node" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334826 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" 
containerName="kube-rbac-proxy-node" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.334897 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="nbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.334965 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="nbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.335015 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335069 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.335132 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335182 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.335234 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="northd" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335282 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="northd" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.335333 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kubecfg-setup" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335384 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kubecfg-setup" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.335449 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="sbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335498 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="sbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335651 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335708 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-node" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335765 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-acl-logging" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335840 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="sbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335903 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="kube-rbac-proxy-ovn-metrics" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.335960 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336006 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="northd" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336052 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336097 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="nbdb" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336162 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovn-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: E0316 15:24:20.336326 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336380 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336502 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n" (OuterVolumeSpecName: "kube-api-access-9p48n") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "kube-api-access-9p48n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336572 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.336605 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerName="ovnkube-controller" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.340401 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.358728 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "83041fd9-2e75-4569-ab47-ac7590a189a6" (UID: "83041fd9-2e75-4569-ab47-ac7590a189a6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428704 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428759 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvcs\" (UniqueName: \"kubernetes.io/projected/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-kube-api-access-crvcs\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428797 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-kubelet\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428819 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428841 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.428870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-env-overrides\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429013 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-systemd-units\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429068 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-netd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429117 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-log-socket\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-systemd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-config\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-slash\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-etc-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429330 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-bin\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429367 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-ovn\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429391 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-node-log\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429416 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-script-lib\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovn-node-metrics-cert\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-var-lib-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429573 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-netns\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429648 4736 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429662 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429673 4736 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429684 4736 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429694 4736 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429706 4736 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429717 4736 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429727 4736 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429739 4736 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 16 
15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429749 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83041fd9-2e75-4569-ab47-ac7590a189a6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429773 4736 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-log-socket\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429784 4736 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-node-log\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429792 4736 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429801 4736 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429810 4736 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429820 4736 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-slash\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429830 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p48n\" (UniqueName: \"kubernetes.io/projected/83041fd9-2e75-4569-ab47-ac7590a189a6-kube-api-access-9p48n\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429840 4736 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429861 4736 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/83041fd9-2e75-4569-ab47-ac7590a189a6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.429871 4736 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/83041fd9-2e75-4569-ab47-ac7590a189a6-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.453038 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530549 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvcs\" (UniqueName: \"kubernetes.io/projected/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-kube-api-access-crvcs\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc 
kubenswrapper[4736]: I0316 15:24:20.530608 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-kubelet\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530633 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530652 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530672 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-env-overrides\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-systemd-units\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-netd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530726 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-log-socket\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530754 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-systemd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530772 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-config\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530794 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-slash\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530781 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-kubelet\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-etc-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530813 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-etc-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530906 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-bin\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-ovn\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530943 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-node-log\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530959 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-script-lib\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.530989 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovn-node-metrics-cert\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531009 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-var-lib-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531043 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-netns\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531063 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531123 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531157 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531183 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-ovn-kubernetes\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-slash\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531392 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-systemd-units\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531420 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-netd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531444 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-log-socket\") pod 
\"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531781 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-env-overrides\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531820 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-cni-bin\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531853 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-ovn\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.531875 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-node-log\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.532638 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-run-systemd\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.532780 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-script-lib\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.533249 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-var-lib-openvswitch\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.533426 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovnkube-config\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.533440 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-host-run-netns\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.535621 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-ovn-node-metrics-cert\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.549667 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvcs\" (UniqueName: \"kubernetes.io/projected/ea8b282e-3be1-4ad6-8baa-6c33fd42b15f-kube-api-access-crvcs\") pod \"ovnkube-node-ngv26\" (UID: \"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f\") " pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: I0316 15:24:20.658889 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:20 crc kubenswrapper[4736]: W0316 15:24:20.680743 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea8b282e_3be1_4ad6_8baa_6c33fd42b15f.slice/crio-0638f484e78200d2b7ba1994643b170fa6703a66d7df52210bc622c821deafa6 WatchSource:0}: Error finding container 0638f484e78200d2b7ba1994643b170fa6703a66d7df52210bc622c821deafa6: Status 404 returned error can't find the container with id 0638f484e78200d2b7ba1994643b170fa6703a66d7df52210bc622c821deafa6 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.091014 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovnkube-controller/2.log" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.093980 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-acl-logging/0.log" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.094763 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w7kdw_83041fd9-2e75-4569-ab47-ac7590a189a6/ovn-controller/0.log" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095352 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" exitCode=0 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095379 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" exitCode=0 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095391 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" exitCode=0 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095401 4736 generic.go:334] "Generic (PLEG): container finished" podID="83041fd9-2e75-4569-ab47-ac7590a189a6" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" exitCode=0 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095437 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095469 4736 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095484 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095630 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095650 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095666 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w7kdw" event={"ID":"83041fd9-2e75-4569-ab47-ac7590a189a6","Type":"ContainerDied","Data":"a4d2e50e055e9dc6357c91ca24fca2b83be1471759c4d06da42cc823f6324718"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.095499 4736 scope.go:117] "RemoveContainer" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.097619 4736 generic.go:334] "Generic (PLEG): container finished" podID="ea8b282e-3be1-4ad6-8baa-6c33fd42b15f" containerID="df643518c7124c4fbb0c73d2b6e784dbdb5a20770f35fa7abcf5ef3c39e2c9ef" exitCode=0 Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.097701 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerDied","Data":"df643518c7124c4fbb0c73d2b6e784dbdb5a20770f35fa7abcf5ef3c39e2c9ef"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.097741 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"0638f484e78200d2b7ba1994643b170fa6703a66d7df52210bc622c821deafa6"} Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.101485 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/1.log" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.149253 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.179659 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w7kdw"] Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.184812 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w7kdw"] Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.199408 4736 scope.go:117] "RemoveContainer" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.235928 4736 scope.go:117] "RemoveContainer" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 
15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.270186 4736 scope.go:117] "RemoveContainer" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.289459 4736 scope.go:117] "RemoveContainer" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.313538 4736 scope.go:117] "RemoveContainer" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.339803 4736 scope.go:117] "RemoveContainer" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.373093 4736 scope.go:117] "RemoveContainer" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.395277 4736 scope.go:117] "RemoveContainer" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.410185 4736 scope.go:117] "RemoveContainer" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.410560 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": container with ID starting with b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4 not found: ID does not exist" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.410635 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} err="failed to get container status \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": rpc error: code = NotFound desc = could not find container \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": container with ID starting with b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.410667 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.410878 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": container with ID starting with 100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2 not found: ID does not exist" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.410903 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} err="failed to get container status \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": rpc error: code = NotFound desc = could not find container \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": container with ID starting with 100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2 not found: ID does not exist" Mar 16 
15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.410922 4736 scope.go:117] "RemoveContainer" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.411128 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": container with ID starting with 3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a not found: ID does not exist" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411150 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} err="failed to get container status \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": rpc error: code = NotFound desc = could not find container \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": container with ID starting with 3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411166 4736 scope.go:117] "RemoveContainer" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.411440 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": container with ID starting with 400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d not found: ID does not exist" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411462 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} err="failed to get container status \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": rpc error: code = NotFound desc = could not find container \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": container with ID starting with 400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411483 4736 scope.go:117] "RemoveContainer" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.411763 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": container with ID starting with b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694 not found: ID does not exist" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411787 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} err="failed to get container status \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": rpc error: code = NotFound desc = could not find container 
\"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": container with ID starting with b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.411806 4736 scope.go:117] "RemoveContainer" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.412307 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": container with ID starting with 060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463 not found: ID does not exist" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.412336 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} err="failed to get container status \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": rpc error: code = NotFound desc = could not find container \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": container with ID starting with 060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.412353 4736 scope.go:117] "RemoveContainer" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.412989 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": container with ID starting with 48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162 not found: ID does not exist" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.413016 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} err="failed to get container status \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": rpc error: code = NotFound desc = could not find container \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": container with ID starting with 48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.413066 4736 scope.go:117] "RemoveContainer" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.413548 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": container with ID starting with 1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc not found: ID does not exist" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.413578 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} 
err="failed to get container status \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": rpc error: code = NotFound desc = could not find container \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": container with ID starting with 1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.413710 4736 scope.go:117] "RemoveContainer" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.414027 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": container with ID starting with db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b not found: ID does not exist" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414051 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} err="failed to get container status \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": rpc error: code = NotFound desc = could not find container \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": container with ID starting with db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414070 4736 scope.go:117] "RemoveContainer" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: E0316 15:24:21.414313 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": container with ID starting with 8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9 not found: ID does not exist" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414351 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9"} err="failed to get container status \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": rpc error: code = NotFound desc = could not find container \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": container with ID starting with 8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414369 4736 scope.go:117] "RemoveContainer" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414740 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} err="failed to get container status \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": rpc error: code = NotFound desc = could not find container \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": container with ID starting with 
b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.414805 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415149 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} err="failed to get container status \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": rpc error: code = NotFound desc = could not find container \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": container with ID starting with 100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415178 4736 scope.go:117] "RemoveContainer" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415394 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} err="failed to get container status \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": rpc error: code = NotFound desc = could not find container \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": container with ID starting with 3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415420 4736 scope.go:117] "RemoveContainer" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415618 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} err="failed to get container status \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": rpc error: code = NotFound desc = could not find container \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": container with ID starting with 400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415638 4736 scope.go:117] "RemoveContainer" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415816 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} err="failed to get container status \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": rpc error: code = NotFound desc = could not find container \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": container with ID starting with b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.415835 4736 scope.go:117] "RemoveContainer" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416013 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} err="failed to get container status \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": rpc error: code = NotFound desc = could not find container \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": container with ID starting with 060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416029 4736 scope.go:117] "RemoveContainer" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416256 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} err="failed to get container status \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": rpc error: code = NotFound desc = could not find container \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": container with ID starting with 48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416273 4736 scope.go:117] "RemoveContainer" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416526 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} err="failed to get container status \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": rpc error: code = NotFound desc = could not find container \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": container with ID starting with 1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416547 4736 scope.go:117] "RemoveContainer" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416862 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} err="failed to get container status \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": rpc error: code = NotFound desc = could not find container \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": container with ID starting with db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.416915 4736 scope.go:117] "RemoveContainer" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417265 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9"} err="failed to get container status \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": rpc error: code = NotFound desc = could not find container \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": container with ID starting with 8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9 not found: ID does not exist" Mar 
16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417283 4736 scope.go:117] "RemoveContainer" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417623 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} err="failed to get container status \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": rpc error: code = NotFound desc = could not find container \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": container with ID starting with b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417668 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417941 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} err="failed to get container status \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": rpc error: code = NotFound desc = could not find container \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": container with ID starting with 100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.417960 4736 scope.go:117] "RemoveContainer" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.418270 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} err="failed to get container status \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": rpc error: code = NotFound desc = could not find container \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": container with ID starting with 3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.418292 4736 scope.go:117] "RemoveContainer" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.419025 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} err="failed to get container status \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": rpc error: code = NotFound desc = could not find container \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": container with ID starting with 400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.419044 4736 scope.go:117] "RemoveContainer" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.419925 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} err="failed to get container status 
\"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": rpc error: code = NotFound desc = could not find container \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": container with ID starting with b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.419949 4736 scope.go:117] "RemoveContainer" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.420263 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} err="failed to get container status \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": rpc error: code = NotFound desc = could not find container \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": container with ID starting with 060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.420285 4736 scope.go:117] "RemoveContainer" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.420892 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} err="failed to get container status \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": rpc error: code = NotFound desc = could not find container \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": container with ID starting with 48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.420917 4736 scope.go:117] "RemoveContainer" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421137 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} err="failed to get container status \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": rpc error: code = NotFound desc = could not find container \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": container with ID starting with 1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421161 4736 scope.go:117] "RemoveContainer" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421527 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} err="failed to get container status \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": rpc error: code = NotFound desc = could not find container \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": container with ID starting with db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421577 4736 scope.go:117] "RemoveContainer" 
containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421764 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9"} err="failed to get container status \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": rpc error: code = NotFound desc = could not find container \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": container with ID starting with 8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.421811 4736 scope.go:117] "RemoveContainer" containerID="b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422002 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4"} err="failed to get container status \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": rpc error: code = NotFound desc = could not find container \"b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4\": container with ID starting with b17ed92f4b5009d7810d2bb7c7054fe98059729c098b89e07585e04c44055af4 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422023 4736 scope.go:117] "RemoveContainer" containerID="100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422258 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2"} err="failed to get container status \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": rpc error: code = NotFound desc = could not find container \"100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2\": container with ID starting with 100d9890982b05837c344a28c0ddd013e22e10b4683e6c9b084617c64d795fe2 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422284 4736 scope.go:117] "RemoveContainer" containerID="3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422533 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a"} err="failed to get container status \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": rpc error: code = NotFound desc = could not find container \"3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a\": container with ID starting with 3fdbaf993e1c1a0003560714c9c5e7b26dc537f12f84c02a3b03587fd4efe77a not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422554 4736 scope.go:117] "RemoveContainer" containerID="400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422748 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d"} err="failed to get container status \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": rpc error: code = NotFound desc = could not find 
container \"400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d\": container with ID starting with 400e399aae73582947bc42cca84c7b8fa79083634871f776d768df20bfe36e7d not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422771 4736 scope.go:117] "RemoveContainer" containerID="b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422975 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694"} err="failed to get container status \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": rpc error: code = NotFound desc = could not find container \"b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694\": container with ID starting with b66cc341488ffd5bd3e5d0605b70a7fc680800e8ec641f49ac2d138155140694 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.422996 4736 scope.go:117] "RemoveContainer" containerID="060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.423271 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463"} err="failed to get container status \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": rpc error: code = NotFound desc = could not find container \"060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463\": container with ID starting with 060a2fcd2ccdc944bc06343751b1b8ffb551c261a0fb566f40f13ac2070ec463 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.423553 4736 scope.go:117] "RemoveContainer" containerID="48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.423787 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162"} err="failed to get container status \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": rpc error: code = NotFound desc = could not find container \"48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162\": container with ID starting with 48a8b31bda313a9c0f8589ea3f75001e962157c51668a9553d347f68dd6e5162 not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.423810 4736 scope.go:117] "RemoveContainer" containerID="1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.423994 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc"} err="failed to get container status \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": rpc error: code = NotFound desc = could not find container \"1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc\": container with ID starting with 1e4f1851725554b43283dd85ab47be9eaf61c3f32add15f711a98c5d72a6babc not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.424014 4736 scope.go:117] "RemoveContainer" containerID="db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.424247 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b"} err="failed to get container status \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": rpc error: code = NotFound desc = could not find container \"db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b\": container with ID starting with db41d0cdf7a46a258f1fa3cd02c1c5723b59d712a1a8a61c82499d8cbf6e454b not found: ID does not exist" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.424273 4736 scope.go:117] "RemoveContainer" containerID="8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9" Mar 16 15:24:21 crc kubenswrapper[4736]: I0316 15:24:21.424540 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9"} err="failed to get container status \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": rpc error: code = NotFound desc = could not find container \"8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9\": container with ID starting with 8b8d1a155c5836bb680b7a4a783f9ca2fc0e126a763d68005c17b915250dd5f9 not found: ID does not exist" Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"ea2a6242a8ad2c0385e1517cd4919af7f56e4be5002ce77a19a02548256c46bf"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112478 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"78ece4c7f99a32c1da095438c82716a49fd5907460bbc0a23c5d8ddd3de55db4"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112491 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"58dd3b8e476b67a9852b242c9870da73ee2d03ba9c4733730bdd1e808ad7da9e"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112501 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"b37e7eacd44863d0418d2bedc3a5263d0a2e4f36dc747e6e635fb226d7068e26"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112510 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"78501434c91afe987ef8b3c848c86d9ddbe0a7def8ab031e615cd27f041d0537"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.112518 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"b55264a8b14499e24226defd48d0dd8386baee52ee77ab0e005dbc9495918508"} Mar 16 15:24:22 crc kubenswrapper[4736]: I0316 15:24:22.989952 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83041fd9-2e75-4569-ab47-ac7590a189a6" path="/var/lib/kubelet/pods/83041fd9-2e75-4569-ab47-ac7590a189a6/volumes" Mar 16 15:24:24 crc kubenswrapper[4736]: I0316 15:24:24.127210 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"c70ac73ea537f142b889fbff61894491746c022fd0a50cc33677853b26139287"} Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.155976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" event={"ID":"ea8b282e-3be1-4ad6-8baa-6c33fd42b15f","Type":"ContainerStarted","Data":"18ca81d82ee495a2230dfa820112610614e454e93bb58cd621de5faf21adff95"} Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.157263 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.157298 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.157310 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.195524 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" podStartSLOduration=7.19549967 podStartE2EDuration="7.19549967s" podCreationTimestamp="2026-03-16 15:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:24:27.19372763 +0000 UTC m=+668.921117917" watchObservedRunningTime="2026-03-16 15:24:27.19549967 +0000 UTC m=+668.922889947" Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.199882 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:27 crc kubenswrapper[4736]: I0316 15:24:27.201117 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:32 crc kubenswrapper[4736]: I0316 15:24:32.978560 4736 scope.go:117] "RemoveContainer" containerID="a6bfb6a3231ab025727278c14ea32655f87819defe1123015b98b189953c091e" Mar 16 15:24:33 crc kubenswrapper[4736]: I0316 15:24:33.203003 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-zgcj2_eabe1535-f51c-4a72-b299-aab5ca4ab624/kube-multus/1.log" Mar 16 15:24:33 crc kubenswrapper[4736]: I0316 15:24:33.203656 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-zgcj2" event={"ID":"eabe1535-f51c-4a72-b299-aab5ca4ab624","Type":"ContainerStarted","Data":"20bab30b6718b44365eec300ba4ee7c5d9db9a66ba7f746ebfd17fa0558c1851"} Mar 16 15:24:50 crc kubenswrapper[4736]: I0316 15:24:50.698229 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ngv26" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.342041 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd"] Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.345323 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.351211 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.361579 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd"] Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.449012 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.449059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.449085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649fp\" (UniqueName: \"kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.551221 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.551335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649fp\" (UniqueName: \"kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.551574 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.551811 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.553629 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.574337 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649fp\" (UniqueName: \"kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:24:59 crc kubenswrapper[4736]: I0316 15:24:59.725523 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:25:00 crc kubenswrapper[4736]: I0316 15:25:00.007010 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd"] Mar 16 15:25:00 crc kubenswrapper[4736]: I0316 15:25:00.393313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerStarted","Data":"bc39c4f6ae73c5577e33068451a4f6639e191f8ad554d2ae2024e97c84795ff0"} Mar 16 15:25:00 crc kubenswrapper[4736]: I0316 15:25:00.393718 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerStarted","Data":"5f19226a7045990070aedc181bf2886faf226c04a8adca75a64accdec04072bb"} Mar 16 15:25:01 crc kubenswrapper[4736]: I0316 15:25:01.402604 4736 generic.go:334] "Generic (PLEG): container finished" podID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerID="bc39c4f6ae73c5577e33068451a4f6639e191f8ad554d2ae2024e97c84795ff0" exitCode=0 Mar 16 15:25:01 crc kubenswrapper[4736]: I0316 15:25:01.402652 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerDied","Data":"bc39c4f6ae73c5577e33068451a4f6639e191f8ad554d2ae2024e97c84795ff0"} Mar 16 15:25:03 crc kubenswrapper[4736]: I0316 15:25:03.416085 4736 generic.go:334] "Generic (PLEG): container finished" podID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerID="8316ecd1349b51803e776706e601673c73250d856b0e6a42a23de0e1423394cc" exitCode=0 Mar 16 15:25:03 crc kubenswrapper[4736]: I0316 15:25:03.416180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" 
event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerDied","Data":"8316ecd1349b51803e776706e601673c73250d856b0e6a42a23de0e1423394cc"} Mar 16 15:25:04 crc kubenswrapper[4736]: I0316 15:25:04.427290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerDied","Data":"866eddce891b81462a516a54be87e3188c276baf87ee66a549e056825d3a753d"} Mar 16 15:25:04 crc kubenswrapper[4736]: I0316 15:25:04.428258 4736 generic.go:334] "Generic (PLEG): container finished" podID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerID="866eddce891b81462a516a54be87e3188c276baf87ee66a549e056825d3a753d" exitCode=0 Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.737478 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.855524 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle\") pod \"f2926967-d729-4a26-8c6a-350f3a0419e1\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.855608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649fp\" (UniqueName: \"kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp\") pod \"f2926967-d729-4a26-8c6a-350f3a0419e1\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.855675 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util\") pod \"f2926967-d729-4a26-8c6a-350f3a0419e1\" (UID: \"f2926967-d729-4a26-8c6a-350f3a0419e1\") " Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.856533 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle" (OuterVolumeSpecName: "bundle") pod "f2926967-d729-4a26-8c6a-350f3a0419e1" (UID: "f2926967-d729-4a26-8c6a-350f3a0419e1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.865546 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp" (OuterVolumeSpecName: "kube-api-access-649fp") pod "f2926967-d729-4a26-8c6a-350f3a0419e1" (UID: "f2926967-d729-4a26-8c6a-350f3a0419e1"). InnerVolumeSpecName "kube-api-access-649fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.869848 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util" (OuterVolumeSpecName: "util") pod "f2926967-d729-4a26-8c6a-350f3a0419e1" (UID: "f2926967-d729-4a26-8c6a-350f3a0419e1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.957719 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.957770 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-649fp\" (UniqueName: \"kubernetes.io/projected/f2926967-d729-4a26-8c6a-350f3a0419e1-kube-api-access-649fp\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:05 crc kubenswrapper[4736]: I0316 15:25:05.957793 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2926967-d729-4a26-8c6a-350f3a0419e1-util\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:06 crc kubenswrapper[4736]: I0316 15:25:06.445201 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" event={"ID":"f2926967-d729-4a26-8c6a-350f3a0419e1","Type":"ContainerDied","Data":"5f19226a7045990070aedc181bf2886faf226c04a8adca75a64accdec04072bb"} Mar 16 15:25:06 crc kubenswrapper[4736]: I0316 15:25:06.445275 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd" Mar 16 15:25:06 crc kubenswrapper[4736]: I0316 15:25:06.445279 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f19226a7045990070aedc181bf2886faf226c04a8adca75a64accdec04072bb" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.842700 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6"] Mar 16 15:25:07 crc kubenswrapper[4736]: E0316 15:25:07.843308 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="extract" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.843322 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="extract" Mar 16 15:25:07 crc kubenswrapper[4736]: E0316 15:25:07.843335 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="util" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.843342 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="util" Mar 16 15:25:07 crc kubenswrapper[4736]: E0316 15:25:07.843352 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="pull" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.843359 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="pull" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.843462 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2926967-d729-4a26-8c6a-350f3a0419e1" containerName="extract" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.843857 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.845719 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.845983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rwjwl" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.846147 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.862182 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6"] Mar 16 15:25:07 crc kubenswrapper[4736]: I0316 15:25:07.997231 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscs5\" (UniqueName: \"kubernetes.io/projected/46173c99-f17a-4099-a210-397cf7b8cd18-kube-api-access-rscs5\") pod \"nmstate-operator-796d4cfff4-5cjp6\" (UID: \"46173c99-f17a-4099-a210-397cf7b8cd18\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" Mar 16 15:25:08 crc kubenswrapper[4736]: I0316 15:25:08.099064 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rscs5\" (UniqueName: \"kubernetes.io/projected/46173c99-f17a-4099-a210-397cf7b8cd18-kube-api-access-rscs5\") pod \"nmstate-operator-796d4cfff4-5cjp6\" (UID: \"46173c99-f17a-4099-a210-397cf7b8cd18\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" Mar 16 15:25:08 crc kubenswrapper[4736]: I0316 15:25:08.117514 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rscs5\" (UniqueName: \"kubernetes.io/projected/46173c99-f17a-4099-a210-397cf7b8cd18-kube-api-access-rscs5\") pod \"nmstate-operator-796d4cfff4-5cjp6\" (UID: \"46173c99-f17a-4099-a210-397cf7b8cd18\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" Mar 16 15:25:08 crc kubenswrapper[4736]: I0316 15:25:08.157115 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" Mar 16 15:25:08 crc kubenswrapper[4736]: I0316 15:25:08.349826 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6"] Mar 16 15:25:08 crc kubenswrapper[4736]: I0316 15:25:08.463836 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" event={"ID":"46173c99-f17a-4099-a210-397cf7b8cd18","Type":"ContainerStarted","Data":"26421c62171edb305cc5d72c19f3c7a6787d035b1f577dd1e7af319d7dc7401e"} Mar 16 15:25:11 crc kubenswrapper[4736]: I0316 15:25:11.483779 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" event={"ID":"46173c99-f17a-4099-a210-397cf7b8cd18","Type":"ContainerStarted","Data":"cc773a3a6328871e6816ade0e86b41cb9a174bad9455b6d665495ed5c8240f17"} Mar 16 15:25:11 crc kubenswrapper[4736]: I0316 15:25:11.506041 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-5cjp6" podStartSLOduration=1.725158112 podStartE2EDuration="4.506015567s" podCreationTimestamp="2026-03-16 15:25:07 +0000 UTC" firstStartedPulling="2026-03-16 15:25:08.362958076 +0000 UTC m=+710.090348363" lastFinishedPulling="2026-03-16 15:25:11.143815531 +0000 UTC m=+712.871205818" observedRunningTime="2026-03-16 15:25:11.499080704 +0000 UTC m=+713.226471011" watchObservedRunningTime="2026-03-16 15:25:11.506015567 +0000 UTC m=+713.233405864" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.445321 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.446647 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.449239 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9n7nj" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.486696 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.528161 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.529052 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.543531 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.552459 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tbvd2"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.563848 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.571820 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.601626 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867fx\" (UniqueName: \"kubernetes.io/projected/b184bdf0-82cb-428b-96ce-f4ebbada7645-kube-api-access-867fx\") pod \"nmstate-metrics-9b8c8685d-nhkfx\" (UID: \"b184bdf0-82cb-428b-96ce-f4ebbada7645\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703457 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-ovs-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703531 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-nmstate-lock\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-867fx\" (UniqueName: \"kubernetes.io/projected/b184bdf0-82cb-428b-96ce-f4ebbada7645-kube-api-access-867fx\") pod \"nmstate-metrics-9b8c8685d-nhkfx\" (UID: \"b184bdf0-82cb-428b-96ce-f4ebbada7645\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703642 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-dbus-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ql9\" (UniqueName: \"kubernetes.io/projected/88c72b3e-a013-4f44-ae5f-93e44846f22a-kube-api-access-z8ql9\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.703695 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxqd\" (UniqueName: \"kubernetes.io/projected/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-kube-api-access-hzxqd\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " 
pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.745629 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-867fx\" (UniqueName: \"kubernetes.io/projected/b184bdf0-82cb-428b-96ce-f4ebbada7645-kube-api-access-867fx\") pod \"nmstate-metrics-9b8c8685d-nhkfx\" (UID: \"b184bdf0-82cb-428b-96ce-f4ebbada7645\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.764385 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.793390 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.794156 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: W0316 15:25:12.801766 4736 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-pb9jg": failed to list *v1.Secret: secrets "default-dockercfg-pb9jg" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Mar 16 15:25:12 crc kubenswrapper[4736]: E0316 15:25:12.801835 4736 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-pb9jg\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-pb9jg\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 16 15:25:12 crc kubenswrapper[4736]: W0316 15:25:12.801899 4736 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: secrets "plugin-serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Mar 16 15:25:12 crc kubenswrapper[4736]: E0316 15:25:12.801910 4736 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"plugin-serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 16 15:25:12 crc kubenswrapper[4736]: W0316 15:25:12.801947 4736 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: configmaps "nginx-conf" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-nmstate": no relationship found between node 'crc' and this object Mar 16 15:25:12 crc kubenswrapper[4736]: E0316 15:25:12.801959 4736 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"nginx-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-nmstate\": no relationship found between node 'crc' and this object" logger="UnhandledError" Mar 16 15:25:12 crc 
kubenswrapper[4736]: I0316 15:25:12.806799 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-dbus-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.806864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.806890 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvhq\" (UniqueName: \"kubernetes.io/projected/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-kube-api-access-sxvhq\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.806917 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ql9\" (UniqueName: \"kubernetes.io/projected/88c72b3e-a013-4f44-ae5f-93e44846f22a-kube-api-access-z8ql9\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.806942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxqd\" (UniqueName: \"kubernetes.io/projected/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-kube-api-access-hzxqd\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.807390 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-dbus-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.807473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.807532 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-ovs-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.807664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-nmstate-lock\") pod \"nmstate-handler-tbvd2\" (UID: 
\"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.807733 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: E0316 15:25:12.808034 4736 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 16 15:25:12 crc kubenswrapper[4736]: E0316 15:25:12.808474 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair podName:88c72b3e-a013-4f44-ae5f-93e44846f22a nodeName:}" failed. No retries permitted until 2026-03-16 15:25:13.308447034 +0000 UTC m=+715.035837321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair") pod "nmstate-webhook-5f558f5558-xbp5l" (UID: "88c72b3e-a013-4f44-ae5f-93e44846f22a") : secret "openshift-nmstate-webhook" not found Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.808905 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-ovs-socket\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.808943 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-nmstate-lock\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.835555 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn"] Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.859040 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxqd\" (UniqueName: \"kubernetes.io/projected/eea9e7aa-6f24-4b45-b7b4-347a38dccb64-kube-api-access-hzxqd\") pod \"nmstate-handler-tbvd2\" (UID: \"eea9e7aa-6f24-4b45-b7b4-347a38dccb64\") " pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.872464 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ql9\" (UniqueName: \"kubernetes.io/projected/88c72b3e-a013-4f44-ae5f-93e44846f22a-kube-api-access-z8ql9\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.906795 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.915151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.915205 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvhq\" (UniqueName: \"kubernetes.io/projected/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-kube-api-access-sxvhq\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.915256 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:12 crc kubenswrapper[4736]: I0316 15:25:12.977322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvhq\" (UniqueName: \"kubernetes.io/projected/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-kube-api-access-sxvhq\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.057064 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b958d4498-zgmwr"] Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.057927 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.139984 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b958d4498-zgmwr"] Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.214581 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx"] Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-service-ca\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220590 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-oauth-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220668 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb8vn\" (UniqueName: \"kubernetes.io/projected/d4563a36-0b6c-48df-acc0-c0bcd50eb002-kube-api-access-wb8vn\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220685 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-oauth-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220702 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.220734 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-trusted-ca-bundle\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb8vn\" (UniqueName: 
\"kubernetes.io/projected/d4563a36-0b6c-48df-acc0-c0bcd50eb002-kube-api-access-wb8vn\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322382 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-oauth-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322421 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322469 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322510 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-trusted-ca-bundle\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322577 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-service-ca\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.322662 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-oauth-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.324404 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.324963 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-trusted-ca-bundle\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.325440 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-oauth-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.325695 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d4563a36-0b6c-48df-acc0-c0bcd50eb002-service-ca\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.337937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/88c72b3e-a013-4f44-ae5f-93e44846f22a-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-xbp5l\" (UID: \"88c72b3e-a013-4f44-ae5f-93e44846f22a\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.338017 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-oauth-config\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.338097 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4563a36-0b6c-48df-acc0-c0bcd50eb002-console-serving-cert\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.342813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb8vn\" (UniqueName: \"kubernetes.io/projected/d4563a36-0b6c-48df-acc0-c0bcd50eb002-kube-api-access-wb8vn\") pod \"console-b958d4498-zgmwr\" (UID: \"d4563a36-0b6c-48df-acc0-c0bcd50eb002\") " pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.416758 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.475354 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.498340 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tbvd2" event={"ID":"eea9e7aa-6f24-4b45-b7b4-347a38dccb64","Type":"ContainerStarted","Data":"ad3f5efc34e350384ed2f959d6f4263da7c35529369654591f39e2ca729bdddc"} Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.500489 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" event={"ID":"b184bdf0-82cb-428b-96ce-f4ebbada7645","Type":"ContainerStarted","Data":"a523bf4b983d6ab19d440df57410dab76ba94399ed06db56f565faae23c95777"} Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.669564 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b958d4498-zgmwr"] Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.728929 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-pb9jg" Mar 16 15:25:13 crc kubenswrapper[4736]: I0316 15:25:13.752219 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l"] Mar 16 15:25:13 crc kubenswrapper[4736]: E0316 15:25:13.915928 4736 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 16 15:25:13 crc kubenswrapper[4736]: E0316 15:25:13.916057 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert podName:3822e5b4-5129-4b3b-8bf3-7262a5ad4cde nodeName:}" failed. No retries permitted until 2026-03-16 15:25:14.416028575 +0000 UTC m=+716.143418862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-tsmxn" (UID: "3822e5b4-5129-4b3b-8bf3-7262a5ad4cde") : failed to sync secret cache: timed out waiting for the condition Mar 16 15:25:13 crc kubenswrapper[4736]: E0316 15:25:13.916324 4736 configmap.go:193] Couldn't get configMap openshift-nmstate/nginx-conf: failed to sync configmap cache: timed out waiting for the condition Mar 16 15:25:13 crc kubenswrapper[4736]: E0316 15:25:13.916458 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf podName:3822e5b4-5129-4b3b-8bf3-7262a5ad4cde nodeName:}" failed. No retries permitted until 2026-03-16 15:25:14.416431226 +0000 UTC m=+716.143821503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf") pod "nmstate-console-plugin-86f58fcf4-tsmxn" (UID: "3822e5b4-5129-4b3b-8bf3-7262a5ad4cde") : failed to sync configmap cache: timed out waiting for the condition Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.309251 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.319274 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.444431 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.444554 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.445976 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.455506 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3822e5b4-5129-4b3b-8bf3-7262a5ad4cde-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-tsmxn\" (UID: \"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.509306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" event={"ID":"88c72b3e-a013-4f44-ae5f-93e44846f22a","Type":"ContainerStarted","Data":"a4d8ed598dcf6376e6df5118c1d789bfa70137d3117e19d87c8d1b0e3afddd51"} Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.512194 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b958d4498-zgmwr" event={"ID":"d4563a36-0b6c-48df-acc0-c0bcd50eb002","Type":"ContainerStarted","Data":"3b860163ecd977ff676beb1c67e322da07c9d69df529e38ff4bb6a0413782245"} Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.512276 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b958d4498-zgmwr" event={"ID":"d4563a36-0b6c-48df-acc0-c0bcd50eb002","Type":"ContainerStarted","Data":"959ac18721e47b61e35b37b7bfe29698fbf534f83df14afd7e0b3029ac0b9c73"} Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.539147 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b958d4498-zgmwr" podStartSLOduration=1.5390859749999999 podStartE2EDuration="1.539085975s" podCreationTimestamp="2026-03-16 
15:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:25:14.531937155 +0000 UTC m=+716.259327452" watchObservedRunningTime="2026-03-16 15:25:14.539085975 +0000 UTC m=+716.266476272" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.654388 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" Mar 16 15:25:14 crc kubenswrapper[4736]: I0316 15:25:14.910047 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn"] Mar 16 15:25:14 crc kubenswrapper[4736]: W0316 15:25:14.920474 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3822e5b4_5129_4b3b_8bf3_7262a5ad4cde.slice/crio-bac38716477131f669a9172110283c8bbe3860d0551e16b8b5386efcaa60fda6 WatchSource:0}: Error finding container bac38716477131f669a9172110283c8bbe3860d0551e16b8b5386efcaa60fda6: Status 404 returned error can't find the container with id bac38716477131f669a9172110283c8bbe3860d0551e16b8b5386efcaa60fda6 Mar 16 15:25:15 crc kubenswrapper[4736]: I0316 15:25:15.519291 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" event={"ID":"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde","Type":"ContainerStarted","Data":"bac38716477131f669a9172110283c8bbe3860d0551e16b8b5386efcaa60fda6"} Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.585578 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tbvd2" event={"ID":"eea9e7aa-6f24-4b45-b7b4-347a38dccb64","Type":"ContainerStarted","Data":"40a0ec3c9ccd3fef6baab88eb863f6ff261619fb3a734f2e1c25727a596eefdd"} Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.587649 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.590181 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" event={"ID":"88c72b3e-a013-4f44-ae5f-93e44846f22a","Type":"ContainerStarted","Data":"baecee11cd2e908d01373b2fb062b1b08c1e6849ec137017f586f470b2d72908"} Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.590705 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.593709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" event={"ID":"b184bdf0-82cb-428b-96ce-f4ebbada7645","Type":"ContainerStarted","Data":"0a3bfb3c3090cac12d693f39b40d9565b01da93be434a61ff9b8108d1a19d7ab"} Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 15:25:18.611551 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tbvd2" podStartSLOduration=2.19201365 podStartE2EDuration="6.611522854s" podCreationTimestamp="2026-03-16 15:25:12 +0000 UTC" firstStartedPulling="2026-03-16 15:25:13.004529914 +0000 UTC m=+714.731920201" lastFinishedPulling="2026-03-16 15:25:17.424039118 +0000 UTC m=+719.151429405" observedRunningTime="2026-03-16 15:25:18.604122867 +0000 UTC m=+720.331513154" watchObservedRunningTime="2026-03-16 15:25:18.611522854 +0000 UTC m=+720.338913141" Mar 16 15:25:18 crc kubenswrapper[4736]: I0316 
15:25:18.625506 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" podStartSLOduration=2.975913167 podStartE2EDuration="6.625479982s" podCreationTimestamp="2026-03-16 15:25:12 +0000 UTC" firstStartedPulling="2026-03-16 15:25:13.765675028 +0000 UTC m=+715.493065315" lastFinishedPulling="2026-03-16 15:25:17.415241843 +0000 UTC m=+719.142632130" observedRunningTime="2026-03-16 15:25:18.622716415 +0000 UTC m=+720.350106722" watchObservedRunningTime="2026-03-16 15:25:18.625479982 +0000 UTC m=+720.352870269" Mar 16 15:25:19 crc kubenswrapper[4736]: I0316 15:25:19.605455 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" event={"ID":"3822e5b4-5129-4b3b-8bf3-7262a5ad4cde","Type":"ContainerStarted","Data":"884851f40f6b1afc9518f65790ab92c58463c9aacad7cb46ef1bdab21a979031"} Mar 16 15:25:21 crc kubenswrapper[4736]: I0316 15:25:21.625904 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" event={"ID":"b184bdf0-82cb-428b-96ce-f4ebbada7645","Type":"ContainerStarted","Data":"26541d7b5df21b6e0a575af1da71289598d32608ee88223ecd6c6031e1ec87a8"} Mar 16 15:25:21 crc kubenswrapper[4736]: I0316 15:25:21.655094 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-tsmxn" podStartSLOduration=5.942823113 podStartE2EDuration="9.655073762s" podCreationTimestamp="2026-03-16 15:25:12 +0000 UTC" firstStartedPulling="2026-03-16 15:25:14.923270862 +0000 UTC m=+716.650661149" lastFinishedPulling="2026-03-16 15:25:18.635521511 +0000 UTC m=+720.362911798" observedRunningTime="2026-03-16 15:25:19.629904781 +0000 UTC m=+721.357295088" watchObservedRunningTime="2026-03-16 15:25:21.655073762 +0000 UTC m=+723.382464049" Mar 16 15:25:21 crc kubenswrapper[4736]: I0316 15:25:21.656470 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-nhkfx" podStartSLOduration=2.350735189 podStartE2EDuration="9.656464491s" podCreationTimestamp="2026-03-16 15:25:12 +0000 UTC" firstStartedPulling="2026-03-16 15:25:13.228253503 +0000 UTC m=+714.955643790" lastFinishedPulling="2026-03-16 15:25:20.533982785 +0000 UTC m=+722.261373092" observedRunningTime="2026-03-16 15:25:21.651916334 +0000 UTC m=+723.379306621" watchObservedRunningTime="2026-03-16 15:25:21.656464491 +0000 UTC m=+723.383854778" Mar 16 15:25:21 crc kubenswrapper[4736]: I0316 15:25:21.831193 4736 scope.go:117] "RemoveContainer" containerID="91640302b04a74294e374a0d7f0ba157d488b60040d4efcead0a384042cfff37" Mar 16 15:25:22 crc kubenswrapper[4736]: I0316 15:25:22.929736 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tbvd2" Mar 16 15:25:23 crc kubenswrapper[4736]: I0316 15:25:23.417319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:23 crc kubenswrapper[4736]: I0316 15:25:23.417416 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:23 crc kubenswrapper[4736]: I0316 15:25:23.425355 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:23 crc kubenswrapper[4736]: I0316 15:25:23.642933 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/console-b958d4498-zgmwr" Mar 16 15:25:23 crc kubenswrapper[4736]: I0316 15:25:23.715008 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:25:33 crc kubenswrapper[4736]: I0316 15:25:33.485940 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" Mar 16 15:25:48 crc kubenswrapper[4736]: I0316 15:25:48.778903 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-p78nr" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" containerID="cri-o://c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2" gracePeriod=15 Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.137654 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk"] Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.138900 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.141027 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.151637 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk"] Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.221093 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p78nr_bdf738c2-dd67-4aea-9d3e-03d68658ee50/console/0.log" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.221410 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.290182 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.290263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwmdh\" (UniqueName: \"kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.290290 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.391244 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.391592 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.391749 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.391901 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd6lg\" (UniqueName: \"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392095 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392259 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392365 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert\") pod \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\" (UID: \"bdf738c2-dd67-4aea-9d3e-03d68658ee50\") " Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392695 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config" (OuterVolumeSpecName: "console-config") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392705 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392732 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca" (OuterVolumeSpecName: "service-ca") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.392683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393073 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwmdh\" (UniqueName: \"kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393209 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393390 4736 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393504 4736 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-service-ca\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393600 4736 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393682 4736 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdf738c2-dd67-4aea-9d3e-03d68658ee50-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393509 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.393129 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.398597 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg" (OuterVolumeSpecName: "kube-api-access-nd6lg") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "kube-api-access-nd6lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.399016 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.407086 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bdf738c2-dd67-4aea-9d3e-03d68658ee50" (UID: "bdf738c2-dd67-4aea-9d3e-03d68658ee50"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.414789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwmdh\" (UniqueName: \"kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.495654 4736 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.495721 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd6lg\" (UniqueName: \"kubernetes.io/projected/bdf738c2-dd67-4aea-9d3e-03d68658ee50-kube-api-access-nd6lg\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.495746 4736 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdf738c2-dd67-4aea-9d3e-03d68658ee50-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.515775 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.727690 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk"] Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.827657 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" event={"ID":"c5b483ba-1563-4424-bd65-5e489514f5e5","Type":"ContainerStarted","Data":"a42215cc45b68378f25b298a30b74a67d85208033689c96ee79adc6b335e1faf"} Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829420 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-p78nr_bdf738c2-dd67-4aea-9d3e-03d68658ee50/console/0.log" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829450 4736 generic.go:334] "Generic (PLEG): container finished" podID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerID="c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2" exitCode=2 Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829495 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p78nr" event={"ID":"bdf738c2-dd67-4aea-9d3e-03d68658ee50","Type":"ContainerDied","Data":"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2"} Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829515 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-p78nr" event={"ID":"bdf738c2-dd67-4aea-9d3e-03d68658ee50","Type":"ContainerDied","Data":"fe5afa948477d3bb0b33a7b26ffe22214def79f125635dd292c53d100e6eb72f"} Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829533 4736 scope.go:117] "RemoveContainer" containerID="c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.829667 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-p78nr" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.853032 4736 scope.go:117] "RemoveContainer" containerID="c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2" Mar 16 15:25:49 crc kubenswrapper[4736]: E0316 15:25:49.853744 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2\": container with ID starting with c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2 not found: ID does not exist" containerID="c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.853780 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2"} err="failed to get container status \"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2\": rpc error: code = NotFound desc = could not find container \"c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2\": container with ID starting with c2370959f27f2be328ad11a23448ca0280a8c7e5b2ee0c4253cec9e2358e1ba2 not found: ID does not exist" Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.870942 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:25:49 crc kubenswrapper[4736]: I0316 15:25:49.876716 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-p78nr"] Mar 16 15:25:50 crc kubenswrapper[4736]: I0316 15:25:50.838458 4736 generic.go:334] "Generic (PLEG): container finished" podID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerID="51bc2a762df384aadd4ba0dd657cfa492b8db88bfb9fdf34dc9b55a3a8cba6e9" exitCode=0 Mar 16 15:25:50 crc kubenswrapper[4736]: I0316 15:25:50.838514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" event={"ID":"c5b483ba-1563-4424-bd65-5e489514f5e5","Type":"ContainerDied","Data":"51bc2a762df384aadd4ba0dd657cfa492b8db88bfb9fdf34dc9b55a3a8cba6e9"} Mar 16 15:25:50 crc kubenswrapper[4736]: I0316 15:25:50.989844 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" path="/var/lib/kubelet/pods/bdf738c2-dd67-4aea-9d3e-03d68658ee50/volumes" Mar 16 15:25:53 crc kubenswrapper[4736]: I0316 15:25:53.863341 4736 generic.go:334] "Generic (PLEG): container finished" podID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerID="17e2b45694f210a3541e5e4f1ea8c85b2c3df8d018ce42ae9a9bdf181da64c20" exitCode=0 Mar 16 15:25:53 crc kubenswrapper[4736]: I0316 15:25:53.863411 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" event={"ID":"c5b483ba-1563-4424-bd65-5e489514f5e5","Type":"ContainerDied","Data":"17e2b45694f210a3541e5e4f1ea8c85b2c3df8d018ce42ae9a9bdf181da64c20"} Mar 16 15:25:54 crc kubenswrapper[4736]: I0316 15:25:54.876336 4736 generic.go:334] "Generic (PLEG): container finished" podID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerID="3b92c9355cf27506e00ecd00550ffd533454d45947bde4b359cbfd81ff4c8c99" exitCode=0 Mar 16 15:25:54 crc kubenswrapper[4736]: I0316 15:25:54.876386 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" event={"ID":"c5b483ba-1563-4424-bd65-5e489514f5e5","Type":"ContainerDied","Data":"3b92c9355cf27506e00ecd00550ffd533454d45947bde4b359cbfd81ff4c8c99"} Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.187746 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.322701 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwmdh\" (UniqueName: \"kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh\") pod \"c5b483ba-1563-4424-bd65-5e489514f5e5\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.322943 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle\") pod \"c5b483ba-1563-4424-bd65-5e489514f5e5\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.323002 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util\") pod \"c5b483ba-1563-4424-bd65-5e489514f5e5\" (UID: \"c5b483ba-1563-4424-bd65-5e489514f5e5\") " Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.324583 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle" (OuterVolumeSpecName: "bundle") pod "c5b483ba-1563-4424-bd65-5e489514f5e5" (UID: "c5b483ba-1563-4424-bd65-5e489514f5e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.331773 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh" (OuterVolumeSpecName: "kube-api-access-qwmdh") pod "c5b483ba-1563-4424-bd65-5e489514f5e5" (UID: "c5b483ba-1563-4424-bd65-5e489514f5e5"). InnerVolumeSpecName "kube-api-access-qwmdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.333377 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util" (OuterVolumeSpecName: "util") pod "c5b483ba-1563-4424-bd65-5e489514f5e5" (UID: "c5b483ba-1563-4424-bd65-5e489514f5e5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.424826 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.424869 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c5b483ba-1563-4424-bd65-5e489514f5e5-util\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.424879 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwmdh\" (UniqueName: \"kubernetes.io/projected/c5b483ba-1563-4424-bd65-5e489514f5e5-kube-api-access-qwmdh\") on node \"crc\" DevicePath \"\"" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.897925 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" event={"ID":"c5b483ba-1563-4424-bd65-5e489514f5e5","Type":"ContainerDied","Data":"a42215cc45b68378f25b298a30b74a67d85208033689c96ee79adc6b335e1faf"} Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.898972 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42215cc45b68378f25b298a30b74a67d85208033689c96ee79adc6b335e1faf" Mar 16 15:25:56 crc kubenswrapper[4736]: I0316 15:25:56.898027 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151514 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561246-2z7qt"] Mar 16 15:26:00 crc kubenswrapper[4736]: E0316 15:26:00.151760 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151773 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" Mar 16 15:26:00 crc kubenswrapper[4736]: E0316 15:26:00.151781 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="util" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151787 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="util" Mar 16 15:26:00 crc kubenswrapper[4736]: E0316 15:26:00.151801 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="pull" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151808 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="pull" Mar 16 15:26:00 crc kubenswrapper[4736]: E0316 15:26:00.151817 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="extract" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151823 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="extract" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.151927 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf738c2-dd67-4aea-9d3e-03d68658ee50" containerName="console" Mar 16 15:26:00 crc 
kubenswrapper[4736]: I0316 15:26:00.151939 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b483ba-1563-4424-bd65-5e489514f5e5" containerName="extract" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.152342 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.154528 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.154574 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.158804 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.178120 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561246-2z7qt"] Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.287334 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pst6s\" (UniqueName: \"kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s\") pod \"auto-csr-approver-29561246-2z7qt\" (UID: \"5d8fcc47-84e9-4db1-8b0c-d64e06af7733\") " pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.388696 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pst6s\" (UniqueName: \"kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s\") pod \"auto-csr-approver-29561246-2z7qt\" (UID: \"5d8fcc47-84e9-4db1-8b0c-d64e06af7733\") " pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.414718 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pst6s\" (UniqueName: \"kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s\") pod \"auto-csr-approver-29561246-2z7qt\" (UID: \"5d8fcc47-84e9-4db1-8b0c-d64e06af7733\") " pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.467208 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.773585 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561246-2z7qt"] Mar 16 15:26:00 crc kubenswrapper[4736]: I0316 15:26:00.927220 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" event={"ID":"5d8fcc47-84e9-4db1-8b0c-d64e06af7733","Type":"ContainerStarted","Data":"0022ebdabd14c7709aefa38dcabe2567f6c3cf7d6a29f81824fcf6fe12e4266c"} Mar 16 15:26:02 crc kubenswrapper[4736]: I0316 15:26:02.939080 4736 generic.go:334] "Generic (PLEG): container finished" podID="5d8fcc47-84e9-4db1-8b0c-d64e06af7733" containerID="bef68a4da7a6b16c38e5d85bfec01d2efb557997c8be7ad0c002012a174d8c4f" exitCode=0 Mar 16 15:26:02 crc kubenswrapper[4736]: I0316 15:26:02.939221 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" event={"ID":"5d8fcc47-84e9-4db1-8b0c-d64e06af7733","Type":"ContainerDied","Data":"bef68a4da7a6b16c38e5d85bfec01d2efb557997c8be7ad0c002012a174d8c4f"} Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.208687 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.271926 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pst6s\" (UniqueName: \"kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s\") pod \"5d8fcc47-84e9-4db1-8b0c-d64e06af7733\" (UID: \"5d8fcc47-84e9-4db1-8b0c-d64e06af7733\") " Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.287410 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s" (OuterVolumeSpecName: "kube-api-access-pst6s") pod "5d8fcc47-84e9-4db1-8b0c-d64e06af7733" (UID: "5d8fcc47-84e9-4db1-8b0c-d64e06af7733"). InnerVolumeSpecName "kube-api-access-pst6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.373877 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pst6s\" (UniqueName: \"kubernetes.io/projected/5d8fcc47-84e9-4db1-8b0c-d64e06af7733-kube-api-access-pst6s\") on node \"crc\" DevicePath \"\"" Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.952244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" event={"ID":"5d8fcc47-84e9-4db1-8b0c-d64e06af7733","Type":"ContainerDied","Data":"0022ebdabd14c7709aefa38dcabe2567f6c3cf7d6a29f81824fcf6fe12e4266c"} Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.952697 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0022ebdabd14c7709aefa38dcabe2567f6c3cf7d6a29f81824fcf6fe12e4266c" Mar 16 15:26:04 crc kubenswrapper[4736]: I0316 15:26:04.952317 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561246-2z7qt" Mar 16 15:26:05 crc kubenswrapper[4736]: I0316 15:26:05.279243 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561240-d98xk"] Mar 16 15:26:05 crc kubenswrapper[4736]: I0316 15:26:05.283360 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561240-d98xk"] Mar 16 15:26:06 crc kubenswrapper[4736]: I0316 15:26:06.985339 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae18c6b9-34d8-48e3-804c-f71bbf6fef7e" path="/var/lib/kubelet/pods/ae18c6b9-34d8-48e3-804c-f71bbf6fef7e/volumes" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.354310 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85"] Mar 16 15:26:07 crc kubenswrapper[4736]: E0316 15:26:07.354567 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d8fcc47-84e9-4db1-8b0c-d64e06af7733" containerName="oc" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.354579 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d8fcc47-84e9-4db1-8b0c-d64e06af7733" containerName="oc" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.354691 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d8fcc47-84e9-4db1-8b0c-d64e06af7733" containerName="oc" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.355326 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.368014 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.368025 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.368233 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.370294 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.370466 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9v6sj" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.390733 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85"] Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.519481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjcrm\" (UniqueName: \"kubernetes.io/projected/109b033e-a4ea-474a-9e79-e895cc75666e-kube-api-access-vjcrm\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.519584 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-apiservice-cert\") pod 
\"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.519666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-webhook-cert\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.620562 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjcrm\" (UniqueName: \"kubernetes.io/projected/109b033e-a4ea-474a-9e79-e895cc75666e-kube-api-access-vjcrm\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.620641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-apiservice-cert\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.620671 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-webhook-cert\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.643508 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-webhook-cert\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.643591 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/109b033e-a4ea-474a-9e79-e895cc75666e-apiservice-cert\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.649818 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjcrm\" (UniqueName: \"kubernetes.io/projected/109b033e-a4ea-474a-9e79-e895cc75666e-kube-api-access-vjcrm\") pod \"metallb-operator-controller-manager-5b95679b96-mfd85\" (UID: \"109b033e-a4ea-474a-9e79-e895cc75666e\") " pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.670780 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.841603 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb"] Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.842406 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.845041 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-n22c5" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.845290 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.845429 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.879210 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb"] Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.924510 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-webhook-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.924571 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-apiservice-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:07 crc kubenswrapper[4736]: I0316 15:26:07.924617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78nm\" (UniqueName: \"kubernetes.io/projected/b7e58b81-1f06-4844-adbe-ade114adc726-kube-api-access-b78nm\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.026410 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-webhook-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.026894 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-apiservice-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.026948 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b78nm\" (UniqueName: \"kubernetes.io/projected/b7e58b81-1f06-4844-adbe-ade114adc726-kube-api-access-b78nm\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.036439 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-webhook-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.055612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7e58b81-1f06-4844-adbe-ade114adc726-apiservice-cert\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.063869 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78nm\" (UniqueName: \"kubernetes.io/projected/b7e58b81-1f06-4844-adbe-ade114adc726-kube-api-access-b78nm\") pod \"metallb-operator-webhook-server-9c55cfcd7-trkfb\" (UID: \"b7e58b81-1f06-4844-adbe-ade114adc726\") " pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.105882 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85"] Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.183486 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.417753 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb"] Mar 16 15:26:08 crc kubenswrapper[4736]: W0316 15:26:08.423841 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7e58b81_1f06_4844_adbe_ade114adc726.slice/crio-4e6e67d920ad7f04342b3063ebd71966ace82e1adf0adc62453c1a9c233cb1d9 WatchSource:0}: Error finding container 4e6e67d920ad7f04342b3063ebd71966ace82e1adf0adc62453c1a9c233cb1d9: Status 404 returned error can't find the container with id 4e6e67d920ad7f04342b3063ebd71966ace82e1adf0adc62453c1a9c233cb1d9 Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.507882 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.507952 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.988186 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" event={"ID":"109b033e-a4ea-474a-9e79-e895cc75666e","Type":"ContainerStarted","Data":"389f12c5c275a3193e848130b16afaf8b48770e2380157b1c3270d53207b3550"} Mar 16 15:26:08 crc kubenswrapper[4736]: I0316 15:26:08.988248 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" event={"ID":"b7e58b81-1f06-4844-adbe-ade114adc726","Type":"ContainerStarted","Data":"4e6e67d920ad7f04342b3063ebd71966ace82e1adf0adc62453c1a9c233cb1d9"} Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.037635 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" event={"ID":"b7e58b81-1f06-4844-adbe-ade114adc726","Type":"ContainerStarted","Data":"fada5b1ce19c68de8a0cd03d80283f12a68a86448a03f3eb95955397edba2fa2"} Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.038611 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.042716 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" event={"ID":"109b033e-a4ea-474a-9e79-e895cc75666e","Type":"ContainerStarted","Data":"235604721faa4da85fdc762017843230aed85818eda0aecd11b5814aefb2913f"} Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.043071 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.069871 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" 
podStartSLOduration=2.088428592 podStartE2EDuration="8.069845787s" podCreationTimestamp="2026-03-16 15:26:07 +0000 UTC" firstStartedPulling="2026-03-16 15:26:08.427170199 +0000 UTC m=+770.154560486" lastFinishedPulling="2026-03-16 15:26:14.408587394 +0000 UTC m=+776.135977681" observedRunningTime="2026-03-16 15:26:15.062834602 +0000 UTC m=+776.790224889" watchObservedRunningTime="2026-03-16 15:26:15.069845787 +0000 UTC m=+776.797236094" Mar 16 15:26:15 crc kubenswrapper[4736]: I0316 15:26:15.099212 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" podStartSLOduration=1.827939748 podStartE2EDuration="8.099193275s" podCreationTimestamp="2026-03-16 15:26:07 +0000 UTC" firstStartedPulling="2026-03-16 15:26:08.115755087 +0000 UTC m=+769.843145374" lastFinishedPulling="2026-03-16 15:26:14.387008614 +0000 UTC m=+776.114398901" observedRunningTime="2026-03-16 15:26:15.094512124 +0000 UTC m=+776.821902411" watchObservedRunningTime="2026-03-16 15:26:15.099193275 +0000 UTC m=+776.826583562" Mar 16 15:26:21 crc kubenswrapper[4736]: I0316 15:26:21.906814 4736 scope.go:117] "RemoveContainer" containerID="8aedc307974f86293ae2757cc01a47771f84ff5c449c7a6c07565bd14805b8fc" Mar 16 15:26:28 crc kubenswrapper[4736]: I0316 15:26:28.191944 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" Mar 16 15:26:38 crc kubenswrapper[4736]: I0316 15:26:38.507957 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:26:38 crc kubenswrapper[4736]: I0316 15:26:38.509860 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:26:40 crc kubenswrapper[4736]: I0316 15:26:40.651352 4736 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 16 15:26:47 crc kubenswrapper[4736]: I0316 15:26:47.676169 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.364278 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hsz7s"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.367048 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.377576 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.378729 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.378793 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8xwg9" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.379031 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.379331 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.397386 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.411622 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.417751 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.417874 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5svp\" (UniqueName: \"kubernetes.io/projected/b8fd5e0d-983e-4780-9a84-dc84a9766804-kube-api-access-f5svp\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.417972 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-startup\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.417994 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-reloader\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.418022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.418054 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-sockets\") pod \"frr-k8s-hsz7s\" (UID: 
\"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.418112 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-conf\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.418156 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wcp\" (UniqueName: \"kubernetes.io/projected/21bc5f54-2767-431f-add2-433724ea4408-kube-api-access-84wcp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.418197 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519044 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wcp\" (UniqueName: \"kubernetes.io/projected/21bc5f54-2767-431f-add2-433724ea4408-kube-api-access-84wcp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519149 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5svp\" (UniqueName: \"kubernetes.io/projected/b8fd5e0d-983e-4780-9a84-dc84a9766804-kube-api-access-f5svp\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-startup\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519229 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-reloader\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 
15:26:48.519247 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519266 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-sockets\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.519287 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-conf\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.519804 4736 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.519884 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs podName:b8fd5e0d-983e-4780-9a84-dc84a9766804 nodeName:}" failed. No retries permitted until 2026-03-16 15:26:49.019858848 +0000 UTC m=+810.747249135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs") pod "frr-k8s-hsz7s" (UID: "b8fd5e0d-983e-4780-9a84-dc84a9766804") : secret "frr-k8s-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.520046 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-reloader\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.520295 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-conf\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.520332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-sockets\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.520473 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b8fd5e0d-983e-4780-9a84-dc84a9766804-frr-startup\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.520655 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics\") pod \"frr-k8s-hsz7s\" (UID: 
\"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.520786 4736 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.520926 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert podName:21bc5f54-2767-431f-add2-433724ea4408 nodeName:}" failed. No retries permitted until 2026-03-16 15:26:49.020899726 +0000 UTC m=+810.748290013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert") pod "frr-k8s-webhook-server-bcc4b6f68-vxgc7" (UID: "21bc5f54-2767-431f-add2-433724ea4408") : secret "frr-k8s-webhook-server-cert" not found Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.550677 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5svp\" (UniqueName: \"kubernetes.io/projected/b8fd5e0d-983e-4780-9a84-dc84a9766804-kube-api-access-f5svp\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.556935 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wcp\" (UniqueName: \"kubernetes.io/projected/21bc5f54-2767-431f-add2-433724ea4408-kube-api-access-84wcp\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.570057 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-djs8w"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.572153 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.576124 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.576127 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.576684 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.576863 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pqv5k" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.607059 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-fxq9c"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.608348 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.611710 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620381 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5ab46a17-c761-4952-b743-9ede5877674a-metallb-excludel2\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620484 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9lf\" (UniqueName: \"kubernetes.io/projected/29db2924-7903-45c8-9f87-a4e3e070a4a3-kube-api-access-wh9lf\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-cert\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620738 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.620756 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qdx\" (UniqueName: \"kubernetes.io/projected/5ab46a17-c761-4952-b743-9ede5877674a-kube-api-access-69qdx\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.621022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.700442 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-fxq9c"] Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.721902 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc 
kubenswrapper[4736]: I0316 15:26:48.721991 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5ab46a17-c761-4952-b743-9ede5877674a-metallb-excludel2\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.722029 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9lf\" (UniqueName: \"kubernetes.io/projected/29db2924-7903-45c8-9f87-a4e3e070a4a3-kube-api-access-wh9lf\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.722080 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-cert\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.722113 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.722132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qdx\" (UniqueName: \"kubernetes.io/projected/5ab46a17-c761-4952-b743-9ede5877674a-kube-api-access-69qdx\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.722162 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.722291 4736 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.722352 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs podName:29db2924-7903-45c8-9f87-a4e3e070a4a3 nodeName:}" failed. No retries permitted until 2026-03-16 15:26:49.222334521 +0000 UTC m=+810.949724808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs") pod "controller-7bb4cc7c98-fxq9c" (UID: "29db2924-7903-45c8-9f87-a4e3e070a4a3") : secret "controller-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.722683 4736 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.722707 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs podName:5ab46a17-c761-4952-b743-9ede5877674a nodeName:}" failed. 
No retries permitted until 2026-03-16 15:26:49.222700301 +0000 UTC m=+810.950090588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs") pod "speaker-djs8w" (UID: "5ab46a17-c761-4952-b743-9ede5877674a") : secret "speaker-certs-secret" not found Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.723410 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5ab46a17-c761-4952-b743-9ede5877674a-metallb-excludel2\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.723629 4736 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 16 15:26:48 crc kubenswrapper[4736]: E0316 15:26:48.723807 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist podName:5ab46a17-c761-4952-b743-9ede5877674a nodeName:}" failed. No retries permitted until 2026-03-16 15:26:49.223774012 +0000 UTC m=+810.951164519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist") pod "speaker-djs8w" (UID: "5ab46a17-c761-4952-b743-9ede5877674a") : secret "metallb-memberlist" not found Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.729630 4736 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.737992 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-cert\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.754815 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qdx\" (UniqueName: \"kubernetes.io/projected/5ab46a17-c761-4952-b743-9ede5877674a-kube-api-access-69qdx\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:48 crc kubenswrapper[4736]: I0316 15:26:48.756613 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9lf\" (UniqueName: \"kubernetes.io/projected/29db2924-7903-45c8-9f87-a4e3e070a4a3-kube-api-access-wh9lf\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.027805 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.028094 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " 
pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.032639 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b8fd5e0d-983e-4780-9a84-dc84a9766804-metrics-certs\") pod \"frr-k8s-hsz7s\" (UID: \"b8fd5e0d-983e-4780-9a84-dc84a9766804\") " pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.032865 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21bc5f54-2767-431f-add2-433724ea4408-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-vxgc7\" (UID: \"21bc5f54-2767-431f-add2-433724ea4408\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.231392 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.231578 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:49 crc kubenswrapper[4736]: E0316 15:26:49.231710 4736 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 16 15:26:49 crc kubenswrapper[4736]: E0316 15:26:49.231782 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist podName:5ab46a17-c761-4952-b743-9ede5877674a nodeName:}" failed. No retries permitted until 2026-03-16 15:26:50.231762917 +0000 UTC m=+811.959153214 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist") pod "speaker-djs8w" (UID: "5ab46a17-c761-4952-b743-9ede5877674a") : secret "metallb-memberlist" not found Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.232312 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.234806 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-metrics-certs\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.235328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29db2924-7903-45c8-9f87-a4e3e070a4a3-metrics-certs\") pod \"controller-7bb4cc7c98-fxq9c\" (UID: \"29db2924-7903-45c8-9f87-a4e3e070a4a3\") " pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.293865 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.300063 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.531296 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.552745 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7"] Mar 16 15:26:49 crc kubenswrapper[4736]: W0316 15:26:49.557248 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21bc5f54_2767_431f_add2_433724ea4408.slice/crio-172dce3cf9d843b89fa0d96a97b823fae973269de1d05c947a6658deb126276d WatchSource:0}: Error finding container 172dce3cf9d843b89fa0d96a97b823fae973269de1d05c947a6658deb126276d: Status 404 returned error can't find the container with id 172dce3cf9d843b89fa0d96a97b823fae973269de1d05c947a6658deb126276d Mar 16 15:26:49 crc kubenswrapper[4736]: I0316 15:26:49.806359 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-fxq9c"] Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.249875 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.259259 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5ab46a17-c761-4952-b743-9ede5877674a-memberlist\") pod \"speaker-djs8w\" (UID: \"5ab46a17-c761-4952-b743-9ede5877674a\") " pod="metallb-system/speaker-djs8w" Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.267037 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"bda3eda53e037259098d5719d27f86baab712f2b5c70d14a8621471524e3e855"} Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.268675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" event={"ID":"21bc5f54-2767-431f-add2-433724ea4408","Type":"ContainerStarted","Data":"172dce3cf9d843b89fa0d96a97b823fae973269de1d05c947a6658deb126276d"} Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.271058 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fxq9c" event={"ID":"29db2924-7903-45c8-9f87-a4e3e070a4a3","Type":"ContainerStarted","Data":"900c9e802899a750470f05609fcb46a8e885107483de68ddc8d6de25c806af38"} Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.271087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fxq9c" event={"ID":"29db2924-7903-45c8-9f87-a4e3e070a4a3","Type":"ContainerStarted","Data":"56edd40a6e6156fd31ec8f1516afd14d64f45a3305ef9f5c6d2aba97d3c3c5d4"} Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.271099 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-fxq9c" 
event={"ID":"29db2924-7903-45c8-9f87-a4e3e070a4a3","Type":"ContainerStarted","Data":"66c75c734a4e2f5be9130c945545e9537561b57c449a1ed9e8da70dac58b69df"} Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.272402 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.297797 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-fxq9c" podStartSLOduration=2.297770435 podStartE2EDuration="2.297770435s" podCreationTimestamp="2026-03-16 15:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:26:50.297630571 +0000 UTC m=+812.025020868" watchObservedRunningTime="2026-03-16 15:26:50.297770435 +0000 UTC m=+812.025160712" Mar 16 15:26:50 crc kubenswrapper[4736]: I0316 15:26:50.399380 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-djs8w" Mar 16 15:26:50 crc kubenswrapper[4736]: W0316 15:26:50.427887 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ab46a17_c761_4952_b743_9ede5877674a.slice/crio-73f95f8533dcdd2c6ce37ef93b5e75b04ae268b288c81b0c9d454ec559e5b81c WatchSource:0}: Error finding container 73f95f8533dcdd2c6ce37ef93b5e75b04ae268b288c81b0c9d454ec559e5b81c: Status 404 returned error can't find the container with id 73f95f8533dcdd2c6ce37ef93b5e75b04ae268b288c81b0c9d454ec559e5b81c Mar 16 15:26:51 crc kubenswrapper[4736]: I0316 15:26:51.281011 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djs8w" event={"ID":"5ab46a17-c761-4952-b743-9ede5877674a","Type":"ContainerStarted","Data":"35f64c65e399b8ef8b68a05ec8f0f934fb26be5395a2a36da92fffa22d9b91d4"} Mar 16 15:26:51 crc kubenswrapper[4736]: I0316 15:26:51.281453 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djs8w" event={"ID":"5ab46a17-c761-4952-b743-9ede5877674a","Type":"ContainerStarted","Data":"bf8331cbd1463251fed9b21501cfe1ee8aba09021f56bc725c3f25c122591919"} Mar 16 15:26:51 crc kubenswrapper[4736]: I0316 15:26:51.281469 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djs8w" event={"ID":"5ab46a17-c761-4952-b743-9ede5877674a","Type":"ContainerStarted","Data":"73f95f8533dcdd2c6ce37ef93b5e75b04ae268b288c81b0c9d454ec559e5b81c"} Mar 16 15:26:51 crc kubenswrapper[4736]: I0316 15:26:51.281903 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-djs8w" Mar 16 15:26:51 crc kubenswrapper[4736]: I0316 15:26:51.304755 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-djs8w" podStartSLOduration=3.304730961 podStartE2EDuration="3.304730961s" podCreationTimestamp="2026-03-16 15:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:26:51.302212251 +0000 UTC m=+813.029602538" watchObservedRunningTime="2026-03-16 15:26:51.304730961 +0000 UTC m=+813.032121248" Mar 16 15:26:58 crc kubenswrapper[4736]: I0316 15:26:58.347139 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" 
event={"ID":"21bc5f54-2767-431f-add2-433724ea4408","Type":"ContainerStarted","Data":"f9aefe1ee580a90f8992887978c2eece33d6439d3b718f927138af87b5394107"} Mar 16 15:26:58 crc kubenswrapper[4736]: I0316 15:26:58.347770 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:26:58 crc kubenswrapper[4736]: I0316 15:26:58.349719 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerID="fb3f9b5fdafcc1a16fba5d8b5c531c6988f8a66fac0d834113f659341fd0efaa" exitCode=0 Mar 16 15:26:58 crc kubenswrapper[4736]: I0316 15:26:58.349786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerDied","Data":"fb3f9b5fdafcc1a16fba5d8b5c531c6988f8a66fac0d834113f659341fd0efaa"} Mar 16 15:26:58 crc kubenswrapper[4736]: I0316 15:26:58.388774 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" podStartSLOduration=2.038439172 podStartE2EDuration="10.388749823s" podCreationTimestamp="2026-03-16 15:26:48 +0000 UTC" firstStartedPulling="2026-03-16 15:26:49.559994817 +0000 UTC m=+811.287385094" lastFinishedPulling="2026-03-16 15:26:57.910305458 +0000 UTC m=+819.637695745" observedRunningTime="2026-03-16 15:26:58.383397842 +0000 UTC m=+820.110788139" watchObservedRunningTime="2026-03-16 15:26:58.388749823 +0000 UTC m=+820.116140110" Mar 16 15:26:59 crc kubenswrapper[4736]: I0316 15:26:59.359075 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerID="2396f07868a5ed27f5fe7a2686e6d6caf1bcd4ae19defdd09389df3c2869f358" exitCode=0 Mar 16 15:26:59 crc kubenswrapper[4736]: I0316 15:26:59.359159 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerDied","Data":"2396f07868a5ed27f5fe7a2686e6d6caf1bcd4ae19defdd09389df3c2869f358"} Mar 16 15:26:59 crc kubenswrapper[4736]: I0316 15:26:59.573448 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-fxq9c" Mar 16 15:27:00 crc kubenswrapper[4736]: I0316 15:27:00.370015 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerID="bb0ecb2b81062a87bcb734d9d34c191d88aff826efdc5ce4b494237884eb33e4" exitCode=0 Mar 16 15:27:00 crc kubenswrapper[4736]: I0316 15:27:00.370070 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerDied","Data":"bb0ecb2b81062a87bcb734d9d34c191d88aff826efdc5ce4b494237884eb33e4"} Mar 16 15:27:00 crc kubenswrapper[4736]: I0316 15:27:00.404898 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-djs8w" Mar 16 15:27:02 crc kubenswrapper[4736]: I0316 15:27:02.390661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"03086fb1ef6c13c908c4877f00b6e52015ba38092bdc65c6392445ef97578a15"} Mar 16 15:27:02 crc kubenswrapper[4736]: I0316 15:27:02.391194 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" 
event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"8092cd424cf4478838eddb816579d256867e446174ea6b5265c96e82f99f31b4"} Mar 16 15:27:02 crc kubenswrapper[4736]: I0316 15:27:02.391213 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"f4ae4ad2e31b5559edfcc56e1e145535c1f222a1d91bee2c4a0202abca23ea4f"} Mar 16 15:27:02 crc kubenswrapper[4736]: I0316 15:27:02.391224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"c7a10ab660a1f06be57f3a3de187f2c5127f5f2f0fb46e446813c5266bc49990"} Mar 16 15:27:02 crc kubenswrapper[4736]: I0316 15:27:02.391236 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"dee946ef5dc1dfdbca295fb412e0f417a21eed589c078f402355ad33b5f70141"} Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.093703 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.095342 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.107506 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.107897 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nd824" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.108127 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.121581 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.311977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9th\" (UniqueName: \"kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th\") pod \"openstack-operator-index-ljzvr\" (UID: \"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db\") " pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.401612 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"c3fcc4bc646dabe79ee8491edbaff9edd69641db0a52a7d157fd544b77f79b80"} Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.402784 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.413132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9th\" (UniqueName: \"kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th\") pod \"openstack-operator-index-ljzvr\" (UID: \"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db\") " pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.429496 
4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hsz7s" podStartSLOduration=6.944044504 podStartE2EDuration="15.429472974s" podCreationTimestamp="2026-03-16 15:26:48 +0000 UTC" firstStartedPulling="2026-03-16 15:26:49.456353813 +0000 UTC m=+811.183744100" lastFinishedPulling="2026-03-16 15:26:57.941782273 +0000 UTC m=+819.669172570" observedRunningTime="2026-03-16 15:27:03.423724812 +0000 UTC m=+825.151115099" watchObservedRunningTime="2026-03-16 15:27:03.429472974 +0000 UTC m=+825.156863261" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.442754 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9th\" (UniqueName: \"kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th\") pod \"openstack-operator-index-ljzvr\" (UID: \"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db\") " pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:03 crc kubenswrapper[4736]: I0316 15:27:03.713213 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:04 crc kubenswrapper[4736]: I0316 15:27:04.157221 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:04 crc kubenswrapper[4736]: I0316 15:27:04.173574 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:27:04 crc kubenswrapper[4736]: I0316 15:27:04.295225 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:27:04 crc kubenswrapper[4736]: I0316 15:27:04.380312 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:27:04 crc kubenswrapper[4736]: I0316 15:27:04.427214 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ljzvr" event={"ID":"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db","Type":"ContainerStarted","Data":"073015188ae0a901fe48d6319a4db314e51bcdf9f4e0349be1e0833e9a3cd67a"} Mar 16 15:27:06 crc kubenswrapper[4736]: I0316 15:27:06.461675 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.103359 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ztkrd"] Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.104335 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.108383 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ztkrd"] Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.275368 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtrr\" (UniqueName: \"kubernetes.io/projected/aeb1e197-872b-4ade-b3e4-425a5e52433f-kube-api-access-6rtrr\") pod \"openstack-operator-index-ztkrd\" (UID: \"aeb1e197-872b-4ade-b3e4-425a5e52433f\") " pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.377411 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rtrr\" (UniqueName: \"kubernetes.io/projected/aeb1e197-872b-4ade-b3e4-425a5e52433f-kube-api-access-6rtrr\") pod \"openstack-operator-index-ztkrd\" (UID: \"aeb1e197-872b-4ade-b3e4-425a5e52433f\") " pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.412345 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rtrr\" (UniqueName: \"kubernetes.io/projected/aeb1e197-872b-4ade-b3e4-425a5e52433f-kube-api-access-6rtrr\") pod \"openstack-operator-index-ztkrd\" (UID: \"aeb1e197-872b-4ade-b3e4-425a5e52433f\") " pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.435857 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.450914 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ljzvr" event={"ID":"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db","Type":"ContainerStarted","Data":"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939"} Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.451360 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ljzvr" podUID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" containerName="registry-server" containerID="cri-o://5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939" gracePeriod=2 Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.478862 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ljzvr" podStartSLOduration=1.4125191990000001 podStartE2EDuration="4.478832877s" podCreationTimestamp="2026-03-16 15:27:03 +0000 UTC" firstStartedPulling="2026-03-16 15:27:04.172920961 +0000 UTC m=+825.900311298" lastFinishedPulling="2026-03-16 15:27:07.239234689 +0000 UTC m=+828.966624976" observedRunningTime="2026-03-16 15:27:07.473782065 +0000 UTC m=+829.201172362" watchObservedRunningTime="2026-03-16 15:27:07.478832877 +0000 UTC m=+829.206223164" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.817799 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.885303 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ztkrd"] Mar 16 15:27:07 crc kubenswrapper[4736]: W0316 15:27:07.893937 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb1e197_872b_4ade_b3e4_425a5e52433f.slice/crio-1dea06a7c0148681b42f6aa8f5356d2ef59727e45c0eac750b434dfb9583f64c WatchSource:0}: Error finding container 1dea06a7c0148681b42f6aa8f5356d2ef59727e45c0eac750b434dfb9583f64c: Status 404 returned error can't find the container with id 1dea06a7c0148681b42f6aa8f5356d2ef59727e45c0eac750b434dfb9583f64c Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.988635 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h9th\" (UniqueName: \"kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th\") pod \"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db\" (UID: \"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db\") " Mar 16 15:27:07 crc kubenswrapper[4736]: I0316 15:27:07.994116 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th" (OuterVolumeSpecName: "kube-api-access-5h9th") pod "6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" (UID: "6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db"). InnerVolumeSpecName "kube-api-access-5h9th". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.090820 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h9th\" (UniqueName: \"kubernetes.io/projected/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db-kube-api-access-5h9th\") on node \"crc\" DevicePath \"\"" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.460862 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztkrd" event={"ID":"aeb1e197-872b-4ade-b3e4-425a5e52433f","Type":"ContainerStarted","Data":"ba2d864e567ab957fa4050d1df9fcb7ee15a313dcc5ef34a20c7bfeba11d75c8"} Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.461374 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztkrd" event={"ID":"aeb1e197-872b-4ade-b3e4-425a5e52433f","Type":"ContainerStarted","Data":"1dea06a7c0148681b42f6aa8f5356d2ef59727e45c0eac750b434dfb9583f64c"} Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.463564 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-ljzvr" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.463628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ljzvr" event={"ID":"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db","Type":"ContainerDied","Data":"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939"} Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.463633 4736 generic.go:334] "Generic (PLEG): container finished" podID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" containerID="5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939" exitCode=0 Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.463691 4736 scope.go:117] "RemoveContainer" containerID="5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.463732 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ljzvr" event={"ID":"6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db","Type":"ContainerDied","Data":"073015188ae0a901fe48d6319a4db314e51bcdf9f4e0349be1e0833e9a3cd67a"} Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.477855 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ztkrd" podStartSLOduration=1.423236124 podStartE2EDuration="1.47782854s" podCreationTimestamp="2026-03-16 15:27:07 +0000 UTC" firstStartedPulling="2026-03-16 15:27:07.898405385 +0000 UTC m=+829.625795672" lastFinishedPulling="2026-03-16 15:27:07.952997801 +0000 UTC m=+829.680388088" observedRunningTime="2026-03-16 15:27:08.476648827 +0000 UTC m=+830.204039164" watchObservedRunningTime="2026-03-16 15:27:08.47782854 +0000 UTC m=+830.205218827" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.500785 4736 scope.go:117] "RemoveContainer" containerID="5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939" Mar 16 15:27:08 crc kubenswrapper[4736]: E0316 15:27:08.503244 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939\": container with ID starting with 5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939 not found: ID does not exist" containerID="5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.503280 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939"} err="failed to get container status \"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939\": rpc error: code = NotFound desc = could not find container \"5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939\": container with ID starting with 5b10912ee59fc9fae7bd5385f75b57f274bed2ad505bccb1df16c9f2c3119939 not found: ID does not exist" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.507814 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.507899 4736 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.507963 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.508895 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.508974 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8" gracePeriod=600 Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.509375 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.516707 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ljzvr"] Mar 16 15:27:08 crc kubenswrapper[4736]: I0316 15:27:08.992230 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" path="/var/lib/kubelet/pods/6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db/volumes" Mar 16 15:27:09 crc kubenswrapper[4736]: I0316 15:27:09.305868 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" Mar 16 15:27:09 crc kubenswrapper[4736]: I0316 15:27:09.478266 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8" exitCode=0 Mar 16 15:27:09 crc kubenswrapper[4736]: I0316 15:27:09.478374 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8"} Mar 16 15:27:09 crc kubenswrapper[4736]: I0316 15:27:09.478458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce"} Mar 16 15:27:09 crc kubenswrapper[4736]: I0316 15:27:09.478496 4736 scope.go:117] "RemoveContainer" containerID="6b9240748b433f9c88fc04b11becfe88b0e7f67784da8195763a15bd86d7528d" Mar 16 15:27:17 crc kubenswrapper[4736]: I0316 15:27:17.436468 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:17 crc kubenswrapper[4736]: I0316 15:27:17.437178 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:17 crc kubenswrapper[4736]: I0316 15:27:17.465779 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:18 crc kubenswrapper[4736]: I0316 15:27:18.078260 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ztkrd" Mar 16 15:27:19 crc kubenswrapper[4736]: I0316 15:27:19.298755 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hsz7s" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.118415 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt"] Mar 16 15:27:31 crc kubenswrapper[4736]: E0316 15:27:31.120890 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" containerName="registry-server" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.127916 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" containerName="registry-server" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.128651 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6560641c-9e3d-46a8-b5fc-4ff4bd8ea6db" containerName="registry-server" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.129696 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt"] Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.129820 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.134965 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dtxtz" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.242905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.243026 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25bgd\" (UniqueName: \"kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.243077 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.345387 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.346255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25bgd\" (UniqueName: \"kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.346317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.346206 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.346638 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.369034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25bgd\" (UniqueName: \"kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd\") pod \"bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.456215 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:31 crc kubenswrapper[4736]: I0316 15:27:31.668968 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt"] Mar 16 15:27:32 crc kubenswrapper[4736]: I0316 15:27:32.154614 4736 generic.go:334] "Generic (PLEG): container finished" podID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerID="fb6b274ede16c4365dd3034c2f208b1904673a64a1684756f55bde567221cd24" exitCode=0 Mar 16 15:27:32 crc kubenswrapper[4736]: I0316 15:27:32.154674 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" event={"ID":"6e002ccd-bc1b-4542-9e87-4086de4291c9","Type":"ContainerDied","Data":"fb6b274ede16c4365dd3034c2f208b1904673a64a1684756f55bde567221cd24"} Mar 16 15:27:32 crc kubenswrapper[4736]: I0316 15:27:32.154978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" event={"ID":"6e002ccd-bc1b-4542-9e87-4086de4291c9","Type":"ContainerStarted","Data":"2a6dccf74bf16ca3a8b765e96755d30a3d34ebfdfd891ebaa83716651ebcded7"} Mar 16 15:27:33 crc kubenswrapper[4736]: I0316 15:27:33.163513 4736 generic.go:334] "Generic (PLEG): container finished" podID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerID="41554304ea1e3e7333dad2f1701ba01fd214603fda6c7d111bb7cb06b973c621" exitCode=0 Mar 16 15:27:33 crc kubenswrapper[4736]: I0316 15:27:33.163690 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" event={"ID":"6e002ccd-bc1b-4542-9e87-4086de4291c9","Type":"ContainerDied","Data":"41554304ea1e3e7333dad2f1701ba01fd214603fda6c7d111bb7cb06b973c621"} Mar 16 15:27:34 crc kubenswrapper[4736]: I0316 15:27:34.174223 4736 generic.go:334] "Generic (PLEG): container finished" podID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerID="946ccc4a48c05209ea75daf775eefad3594e0d928f815f84dbd7e8628f3b57e6" exitCode=0 Mar 16 15:27:34 crc kubenswrapper[4736]: I0316 15:27:34.174320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" event={"ID":"6e002ccd-bc1b-4542-9e87-4086de4291c9","Type":"ContainerDied","Data":"946ccc4a48c05209ea75daf775eefad3594e0d928f815f84dbd7e8628f3b57e6"} Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.422746 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.609976 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle\") pod \"6e002ccd-bc1b-4542-9e87-4086de4291c9\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.610063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util\") pod \"6e002ccd-bc1b-4542-9e87-4086de4291c9\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.610188 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25bgd\" (UniqueName: \"kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd\") pod \"6e002ccd-bc1b-4542-9e87-4086de4291c9\" (UID: \"6e002ccd-bc1b-4542-9e87-4086de4291c9\") " Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.611053 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle" (OuterVolumeSpecName: "bundle") pod "6e002ccd-bc1b-4542-9e87-4086de4291c9" (UID: "6e002ccd-bc1b-4542-9e87-4086de4291c9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.622383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd" (OuterVolumeSpecName: "kube-api-access-25bgd") pod "6e002ccd-bc1b-4542-9e87-4086de4291c9" (UID: "6e002ccd-bc1b-4542-9e87-4086de4291c9"). InnerVolumeSpecName "kube-api-access-25bgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.624524 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util" (OuterVolumeSpecName: "util") pod "6e002ccd-bc1b-4542-9e87-4086de4291c9" (UID: "6e002ccd-bc1b-4542-9e87-4086de4291c9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.711438 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25bgd\" (UniqueName: \"kubernetes.io/projected/6e002ccd-bc1b-4542-9e87-4086de4291c9-kube-api-access-25bgd\") on node \"crc\" DevicePath \"\"" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.711470 4736 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:27:35 crc kubenswrapper[4736]: I0316 15:27:35.711479 4736 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6e002ccd-bc1b-4542-9e87-4086de4291c9-util\") on node \"crc\" DevicePath \"\"" Mar 16 15:27:36 crc kubenswrapper[4736]: I0316 15:27:36.202532 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" event={"ID":"6e002ccd-bc1b-4542-9e87-4086de4291c9","Type":"ContainerDied","Data":"2a6dccf74bf16ca3a8b765e96755d30a3d34ebfdfd891ebaa83716651ebcded7"} Mar 16 15:27:36 crc kubenswrapper[4736]: I0316 15:27:36.203049 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a6dccf74bf16ca3a8b765e96755d30a3d34ebfdfd891ebaa83716651ebcded7" Mar 16 15:27:36 crc kubenswrapper[4736]: I0316 15:27:36.202806 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.041589 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x"] Mar 16 15:27:38 crc kubenswrapper[4736]: E0316 15:27:38.042306 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="util" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.042326 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="util" Mar 16 15:27:38 crc kubenswrapper[4736]: E0316 15:27:38.042345 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="pull" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.042354 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="pull" Mar 16 15:27:38 crc kubenswrapper[4736]: E0316 15:27:38.042367 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="extract" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.042375 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="extract" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.042523 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e002ccd-bc1b-4542-9e87-4086de4291c9" containerName="extract" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.043088 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.047954 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-wv6x4" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.056408 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnz8\" (UniqueName: \"kubernetes.io/projected/34b67803-050a-457b-80ff-64455949a26d-kube-api-access-lpnz8\") pod \"openstack-operator-controller-init-5dbd94f64-hsp7x\" (UID: \"34b67803-050a-457b-80ff-64455949a26d\") " pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.069671 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x"] Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.157780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnz8\" (UniqueName: \"kubernetes.io/projected/34b67803-050a-457b-80ff-64455949a26d-kube-api-access-lpnz8\") pod \"openstack-operator-controller-init-5dbd94f64-hsp7x\" (UID: \"34b67803-050a-457b-80ff-64455949a26d\") " pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.198074 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnz8\" (UniqueName: \"kubernetes.io/projected/34b67803-050a-457b-80ff-64455949a26d-kube-api-access-lpnz8\") pod \"openstack-operator-controller-init-5dbd94f64-hsp7x\" (UID: \"34b67803-050a-457b-80ff-64455949a26d\") " pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.360340 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:38 crc kubenswrapper[4736]: I0316 15:27:38.656828 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x"] Mar 16 15:27:39 crc kubenswrapper[4736]: I0316 15:27:39.231363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" event={"ID":"34b67803-050a-457b-80ff-64455949a26d","Type":"ContainerStarted","Data":"46b8ea7a6edf4752132deb99f89c4595a7776a667111b5bbc47fbe90d113c15a"} Mar 16 15:27:44 crc kubenswrapper[4736]: I0316 15:27:44.289857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" event={"ID":"34b67803-050a-457b-80ff-64455949a26d","Type":"ContainerStarted","Data":"61601d9e1d0dc1436d129bc4fd7f0767193e800a25e4252517bc6d188cc61c83"} Mar 16 15:27:44 crc kubenswrapper[4736]: I0316 15:27:44.290653 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:27:44 crc kubenswrapper[4736]: I0316 15:27:44.328574 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" podStartSLOduration=1.073079719 podStartE2EDuration="6.328551402s" podCreationTimestamp="2026-03-16 15:27:38 +0000 UTC" firstStartedPulling="2026-03-16 15:27:38.663646704 +0000 UTC m=+860.391036981" lastFinishedPulling="2026-03-16 15:27:43.919118377 +0000 UTC m=+865.646508664" observedRunningTime="2026-03-16 15:27:44.324441273 +0000 UTC m=+866.051831560" watchObservedRunningTime="2026-03-16 15:27:44.328551402 +0000 UTC m=+866.055941689" Mar 16 15:27:58 crc kubenswrapper[4736]: I0316 15:27:58.363263 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.145842 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561248-5nwht"] Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.147695 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.156511 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561248-5nwht"] Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.157956 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.158495 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.163622 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.307063 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcng\" (UniqueName: \"kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng\") pod \"auto-csr-approver-29561248-5nwht\" (UID: \"d3c3b2e2-c7c8-4879-8b27-1f379491c363\") " pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.409050 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcng\" (UniqueName: \"kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng\") pod \"auto-csr-approver-29561248-5nwht\" (UID: \"d3c3b2e2-c7c8-4879-8b27-1f379491c363\") " pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.433821 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcng\" (UniqueName: \"kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng\") pod \"auto-csr-approver-29561248-5nwht\" (UID: \"d3c3b2e2-c7c8-4879-8b27-1f379491c363\") " pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.515908 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:00 crc kubenswrapper[4736]: I0316 15:28:00.772936 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561248-5nwht"] Mar 16 15:28:01 crc kubenswrapper[4736]: I0316 15:28:01.436881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561248-5nwht" event={"ID":"d3c3b2e2-c7c8-4879-8b27-1f379491c363","Type":"ContainerStarted","Data":"f5995766228b02238ca43a936002d701bf9811156d632e97043ef7cae16c71fd"} Mar 16 15:28:02 crc kubenswrapper[4736]: I0316 15:28:02.446692 4736 generic.go:334] "Generic (PLEG): container finished" podID="d3c3b2e2-c7c8-4879-8b27-1f379491c363" containerID="68b3e22b6bcecfec9f9712dbe7b564ffabed1fcd1e9a892dd27040f7a30c5290" exitCode=0 Mar 16 15:28:02 crc kubenswrapper[4736]: I0316 15:28:02.446781 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561248-5nwht" event={"ID":"d3c3b2e2-c7c8-4879-8b27-1f379491c363","Type":"ContainerDied","Data":"68b3e22b6bcecfec9f9712dbe7b564ffabed1fcd1e9a892dd27040f7a30c5290"} Mar 16 15:28:03 crc kubenswrapper[4736]: I0316 15:28:03.686840 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:03 crc kubenswrapper[4736]: I0316 15:28:03.859721 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcng\" (UniqueName: \"kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng\") pod \"d3c3b2e2-c7c8-4879-8b27-1f379491c363\" (UID: \"d3c3b2e2-c7c8-4879-8b27-1f379491c363\") " Mar 16 15:28:03 crc kubenswrapper[4736]: I0316 15:28:03.871184 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng" (OuterVolumeSpecName: "kube-api-access-mlcng") pod "d3c3b2e2-c7c8-4879-8b27-1f379491c363" (UID: "d3c3b2e2-c7c8-4879-8b27-1f379491c363"). InnerVolumeSpecName "kube-api-access-mlcng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:28:03 crc kubenswrapper[4736]: I0316 15:28:03.962085 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlcng\" (UniqueName: \"kubernetes.io/projected/d3c3b2e2-c7c8-4879-8b27-1f379491c363-kube-api-access-mlcng\") on node \"crc\" DevicePath \"\"" Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.461571 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561248-5nwht" event={"ID":"d3c3b2e2-c7c8-4879-8b27-1f379491c363","Type":"ContainerDied","Data":"f5995766228b02238ca43a936002d701bf9811156d632e97043ef7cae16c71fd"} Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.461614 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561248-5nwht" Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.461617 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5995766228b02238ca43a936002d701bf9811156d632e97043ef7cae16c71fd" Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.743383 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561242-k42xz"] Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.750421 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561242-k42xz"] Mar 16 15:28:04 crc kubenswrapper[4736]: I0316 15:28:04.988966 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1657e3-98d0-4f79-9e5b-27a428d12c85" path="/var/lib/kubelet/pods/8d1657e3-98d0-4f79-9e5b-27a428d12c85/volumes" Mar 16 15:28:22 crc kubenswrapper[4736]: I0316 15:28:22.000055 4736 scope.go:117] "RemoveContainer" containerID="02ffa8fbd7725bd1f500fbbad2ca7ed5e168b976674a32288d293bab1b1bcc25" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.433615 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm"] Mar 16 15:28:25 crc kubenswrapper[4736]: E0316 15:28:25.435348 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c3b2e2-c7c8-4879-8b27-1f379491c363" containerName="oc" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.435393 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c3b2e2-c7c8-4879-8b27-1f379491c363" containerName="oc" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.436151 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c3b2e2-c7c8-4879-8b27-1f379491c363" containerName="oc" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.437551 4736 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.443157 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-jjst2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.459577 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.463778 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.472073 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-sqpft" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.506472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.517377 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.518683 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.529443 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-w955q" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.534291 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.555828 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.556974 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.563123 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pbv4z" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.588435 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.596302 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9s2\" (UniqueName: \"kubernetes.io/projected/99a35a5a-103f-4e00-9b39-d4f86531f5f7-kube-api-access-sg9s2\") pod \"barbican-operator-controller-manager-59bc569d95-w4ppt\" (UID: \"99a35a5a-103f-4e00-9b39-d4f86531f5f7\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.596662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6jc\" (UniqueName: \"kubernetes.io/projected/8163ef92-862a-4de1-a443-8ac84a5ba0c9-kube-api-access-dx6jc\") pod \"cinder-operator-controller-manager-8d58dc466-tgqsm\" (UID: \"8163ef92-862a-4de1-a443-8ac84a5ba0c9\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.605814 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.607032 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.615897 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nqbt2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.623864 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.630378 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.652785 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.654272 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.661274 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.662467 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.663170 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.663503 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-297gh" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.666346 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9g78b" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.672683 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.685775 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.701659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9s2\" (UniqueName: \"kubernetes.io/projected/99a35a5a-103f-4e00-9b39-d4f86531f5f7-kube-api-access-sg9s2\") pod \"barbican-operator-controller-manager-59bc569d95-w4ppt\" (UID: \"99a35a5a-103f-4e00-9b39-d4f86531f5f7\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.701721 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5szk\" (UniqueName: \"kubernetes.io/projected/9d7909e1-3088-4a9e-b2ac-286927abd741-kube-api-access-j5szk\") pod \"glance-operator-controller-manager-79df6bcc97-z9l9q\" (UID: \"9d7909e1-3088-4a9e-b2ac-286927abd741\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.701770 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hm2g\" (UniqueName: \"kubernetes.io/projected/1ae22b3c-97a5-4592-b263-557131818155-kube-api-access-2hm2g\") pod \"heat-operator-controller-manager-67dd5f86f5-d6m9n\" (UID: \"1ae22b3c-97a5-4592-b263-557131818155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.701810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89nw4\" (UniqueName: \"kubernetes.io/projected/2d48b057-960e-445a-bc66-b6d3dbfb56f9-kube-api-access-89nw4\") pod \"designate-operator-controller-manager-588d4d986b-c6tc2\" (UID: \"2d48b057-960e-445a-bc66-b6d3dbfb56f9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.701835 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6jc\" (UniqueName: \"kubernetes.io/projected/8163ef92-862a-4de1-a443-8ac84a5ba0c9-kube-api-access-dx6jc\") pod \"cinder-operator-controller-manager-8d58dc466-tgqsm\" (UID: \"8163ef92-862a-4de1-a443-8ac84a5ba0c9\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.722595 4736 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.723513 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.731140 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.740196 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9k2rs" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.740289 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9s2\" (UniqueName: \"kubernetes.io/projected/99a35a5a-103f-4e00-9b39-d4f86531f5f7-kube-api-access-sg9s2\") pod \"barbican-operator-controller-manager-59bc569d95-w4ppt\" (UID: \"99a35a5a-103f-4e00-9b39-d4f86531f5f7\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.758671 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.759827 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.762681 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gvtk4" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.775934 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.777658 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.784860 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wb5tc" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.786358 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6jc\" (UniqueName: \"kubernetes.io/projected/8163ef92-862a-4de1-a443-8ac84a5ba0c9-kube-api-access-dx6jc\") pod \"cinder-operator-controller-manager-8d58dc466-tgqsm\" (UID: \"8163ef92-862a-4de1-a443-8ac84a5ba0c9\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.790934 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803316 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5szk\" (UniqueName: \"kubernetes.io/projected/9d7909e1-3088-4a9e-b2ac-286927abd741-kube-api-access-j5szk\") pod \"glance-operator-controller-manager-79df6bcc97-z9l9q\" (UID: \"9d7909e1-3088-4a9e-b2ac-286927abd741\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hm2g\" (UniqueName: \"kubernetes.io/projected/1ae22b3c-97a5-4592-b263-557131818155-kube-api-access-2hm2g\") pod \"heat-operator-controller-manager-67dd5f86f5-d6m9n\" (UID: \"1ae22b3c-97a5-4592-b263-557131818155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803419 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhb5\" (UniqueName: \"kubernetes.io/projected/634ac783-1fe6-4191-b432-f22ad5d84357-kube-api-access-lfhb5\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89nw4\" (UniqueName: \"kubernetes.io/projected/2d48b057-960e-445a-bc66-b6d3dbfb56f9-kube-api-access-89nw4\") pod \"designate-operator-controller-manager-588d4d986b-c6tc2\" (UID: \"2d48b057-960e-445a-bc66-b6d3dbfb56f9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zkr\" (UniqueName: \"kubernetes.io/projected/aac26090-af84-496a-afdf-efdb24694811-kube-api-access-x4zkr\") pod \"horizon-operator-controller-manager-8464cc45fb-fd7xj\" (UID: \"aac26090-af84-496a-afdf-efdb24694811\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.803494 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod 
\"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.804186 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.807301 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.827525 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.827704 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.829760 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.833228 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.843769 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mrkg5" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.867081 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-bqghw"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.870635 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.873787 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hm2g\" (UniqueName: \"kubernetes.io/projected/1ae22b3c-97a5-4592-b263-557131818155-kube-api-access-2hm2g\") pod \"heat-operator-controller-manager-67dd5f86f5-d6m9n\" (UID: \"1ae22b3c-97a5-4592-b263-557131818155\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.879070 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.880332 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.889227 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-hvc6s" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.902549 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5szk\" (UniqueName: \"kubernetes.io/projected/9d7909e1-3088-4a9e-b2ac-286927abd741-kube-api-access-j5szk\") pod \"glance-operator-controller-manager-79df6bcc97-z9l9q\" (UID: \"9d7909e1-3088-4a9e-b2ac-286927abd741\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.906825 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2sd\" (UniqueName: \"kubernetes.io/projected/f8308a1a-301e-40b9-8a0e-b7e267e74a10-kube-api-access-hk2sd\") pod \"ironic-operator-controller-manager-6f787dddc9-9kzj2\" (UID: \"f8308a1a-301e-40b9-8a0e-b7e267e74a10\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.906905 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhb5\" (UniqueName: \"kubernetes.io/projected/634ac783-1fe6-4191-b432-f22ad5d84357-kube-api-access-lfhb5\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.906947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4zkr\" (UniqueName: \"kubernetes.io/projected/aac26090-af84-496a-afdf-efdb24694811-kube-api-access-x4zkr\") pod \"horizon-operator-controller-manager-8464cc45fb-fd7xj\" (UID: \"aac26090-af84-496a-afdf-efdb24694811\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.906971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.907353 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfmk\" (UniqueName: \"kubernetes.io/projected/569449b8-1135-4dd6-b6fe-ad66844b413e-kube-api-access-tnfmk\") pod \"manila-operator-controller-manager-55f864c847-sjpl5\" (UID: \"569449b8-1135-4dd6-b6fe-ad66844b413e\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.907442 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5f8\" (UniqueName: \"kubernetes.io/projected/d77bc7ac-fb08-4603-8453-677c6be6916d-kube-api-access-hr5f8\") pod \"keystone-operator-controller-manager-768b96df4c-7gd6n\" (UID: \"d77bc7ac-fb08-4603-8453-677c6be6916d\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:25 crc 
kubenswrapper[4736]: E0316 15:28:25.907620 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:25 crc kubenswrapper[4736]: E0316 15:28:25.907678 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:26.407655665 +0000 UTC m=+908.135045952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.919708 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-db65n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.940425 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-bqghw"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.940501 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.941893 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.954244 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4zkr\" (UniqueName: \"kubernetes.io/projected/aac26090-af84-496a-afdf-efdb24694811-kube-api-access-x4zkr\") pod \"horizon-operator-controller-manager-8464cc45fb-fd7xj\" (UID: \"aac26090-af84-496a-afdf-efdb24694811\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.954520 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg"] Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.955928 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.959459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89nw4\" (UniqueName: \"kubernetes.io/projected/2d48b057-960e-445a-bc66-b6d3dbfb56f9-kube-api-access-89nw4\") pod \"designate-operator-controller-manager-588d4d986b-c6tc2\" (UID: \"2d48b057-960e-445a-bc66-b6d3dbfb56f9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.966008 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wzkst" Mar 16 15:28:25 crc kubenswrapper[4736]: I0316 15:28:25.969228 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhb5\" (UniqueName: \"kubernetes.io/projected/634ac783-1fe6-4191-b432-f22ad5d84357-kube-api-access-lfhb5\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.009932 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010524 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxq46\" (UniqueName: \"kubernetes.io/projected/285f243f-b886-440f-8a92-b1ddf60bf6e6-kube-api-access-lxq46\") pod \"mariadb-operator-controller-manager-67ccfc9778-bgvjq\" (UID: \"285f243f-b886-440f-8a92-b1ddf60bf6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010612 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvf8h\" (UniqueName: \"kubernetes.io/projected/b1ae843c-f1b5-4ee2-8300-55f93941ba2b-kube-api-access-cvf8h\") pod \"nova-operator-controller-manager-5d488d59fb-7hrfc\" (UID: \"b1ae843c-f1b5-4ee2-8300-55f93941ba2b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfmk\" (UniqueName: \"kubernetes.io/projected/569449b8-1135-4dd6-b6fe-ad66844b413e-kube-api-access-tnfmk\") pod \"manila-operator-controller-manager-55f864c847-sjpl5\" (UID: \"569449b8-1135-4dd6-b6fe-ad66844b413e\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5f8\" (UniqueName: \"kubernetes.io/projected/d77bc7ac-fb08-4603-8453-677c6be6916d-kube-api-access-hr5f8\") pod \"keystone-operator-controller-manager-768b96df4c-7gd6n\" (UID: \"d77bc7ac-fb08-4603-8453-677c6be6916d\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010818 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8p94\" (UniqueName: 
\"kubernetes.io/projected/99d86cbe-cf17-42a7-bc5b-d692609fff64-kube-api-access-q8p94\") pod \"neutron-operator-controller-manager-767865f676-bqghw\" (UID: \"99d86cbe-cf17-42a7-bc5b-d692609fff64\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.010866 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2sd\" (UniqueName: \"kubernetes.io/projected/f8308a1a-301e-40b9-8a0e-b7e267e74a10-kube-api-access-hk2sd\") pod \"ironic-operator-controller-manager-6f787dddc9-9kzj2\" (UID: \"f8308a1a-301e-40b9-8a0e-b7e267e74a10\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.064494 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.094963 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2sd\" (UniqueName: \"kubernetes.io/projected/f8308a1a-301e-40b9-8a0e-b7e267e74a10-kube-api-access-hk2sd\") pod \"ironic-operator-controller-manager-6f787dddc9-9kzj2\" (UID: \"f8308a1a-301e-40b9-8a0e-b7e267e74a10\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.115124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvf8h\" (UniqueName: \"kubernetes.io/projected/b1ae843c-f1b5-4ee2-8300-55f93941ba2b-kube-api-access-cvf8h\") pod \"nova-operator-controller-manager-5d488d59fb-7hrfc\" (UID: \"b1ae843c-f1b5-4ee2-8300-55f93941ba2b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.119732 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsr2n\" (UniqueName: \"kubernetes.io/projected/e7971b38-1b13-4984-a055-2cc52b34bf6b-kube-api-access-wsr2n\") pod \"octavia-operator-controller-manager-5b9f45d989-47kkg\" (UID: \"e7971b38-1b13-4984-a055-2cc52b34bf6b\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.119892 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8p94\" (UniqueName: \"kubernetes.io/projected/99d86cbe-cf17-42a7-bc5b-d692609fff64-kube-api-access-q8p94\") pod \"neutron-operator-controller-manager-767865f676-bqghw\" (UID: \"99d86cbe-cf17-42a7-bc5b-d692609fff64\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.120024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxq46\" (UniqueName: \"kubernetes.io/projected/285f243f-b886-440f-8a92-b1ddf60bf6e6-kube-api-access-lxq46\") pod \"mariadb-operator-controller-manager-67ccfc9778-bgvjq\" (UID: \"285f243f-b886-440f-8a92-b1ddf60bf6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.133311 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfmk\" (UniqueName: \"kubernetes.io/projected/569449b8-1135-4dd6-b6fe-ad66844b413e-kube-api-access-tnfmk\") pod 
\"manila-operator-controller-manager-55f864c847-sjpl5\" (UID: \"569449b8-1135-4dd6-b6fe-ad66844b413e\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.142465 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.158230 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.165949 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.175035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5f8\" (UniqueName: \"kubernetes.io/projected/d77bc7ac-fb08-4603-8453-677c6be6916d-kube-api-access-hr5f8\") pod \"keystone-operator-controller-manager-768b96df4c-7gd6n\" (UID: \"d77bc7ac-fb08-4603-8453-677c6be6916d\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.180921 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.182287 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tt7jg" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.184900 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.193922 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvf8h\" (UniqueName: \"kubernetes.io/projected/b1ae843c-f1b5-4ee2-8300-55f93941ba2b-kube-api-access-cvf8h\") pod \"nova-operator-controller-manager-5d488d59fb-7hrfc\" (UID: \"b1ae843c-f1b5-4ee2-8300-55f93941ba2b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.226119 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsr2n\" (UniqueName: \"kubernetes.io/projected/e7971b38-1b13-4984-a055-2cc52b34bf6b-kube-api-access-wsr2n\") pod \"octavia-operator-controller-manager-5b9f45d989-47kkg\" (UID: \"e7971b38-1b13-4984-a055-2cc52b34bf6b\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.226196 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2vk\" (UniqueName: \"kubernetes.io/projected/62d536a1-c184-4077-a6f8-4285c3ebe5db-kube-api-access-md2vk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.226258 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.245195 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.246474 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.248691 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.265065 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.279967 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8p94\" (UniqueName: \"kubernetes.io/projected/99d86cbe-cf17-42a7-bc5b-d692609fff64-kube-api-access-q8p94\") pod \"neutron-operator-controller-manager-767865f676-bqghw\" (UID: \"99d86cbe-cf17-42a7-bc5b-d692609fff64\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.281427 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxq46\" (UniqueName: \"kubernetes.io/projected/285f243f-b886-440f-8a92-b1ddf60bf6e6-kube-api-access-lxq46\") pod \"mariadb-operator-controller-manager-67ccfc9778-bgvjq\" (UID: \"285f243f-b886-440f-8a92-b1ddf60bf6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.292961 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.299232 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5kldp" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.341825 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.346643 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsr2n\" (UniqueName: \"kubernetes.io/projected/e7971b38-1b13-4984-a055-2cc52b34bf6b-kube-api-access-wsr2n\") pod \"octavia-operator-controller-manager-5b9f45d989-47kkg\" (UID: \"e7971b38-1b13-4984-a055-2cc52b34bf6b\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.342655 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.348369 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2vk\" (UniqueName: \"kubernetes.io/projected/62d536a1-c184-4077-a6f8-4285c3ebe5db-kube-api-access-md2vk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.348455 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.348489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46nmf\" (UniqueName: \"kubernetes.io/projected/40be2c61-bd71-46b6-b837-abf09d8d5aeb-kube-api-access-46nmf\") pod \"ovn-operator-controller-manager-884679f54-vfqg8\" (UID: \"40be2c61-bd71-46b6-b837-abf09d8d5aeb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.349000 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.349041 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:26.84902685 +0000 UTC m=+908.576417137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.349534 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.350858 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.365542 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.366642 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.376714 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6f56q" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.377528 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sksxp" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.397693 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.415514 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2vk\" (UniqueName: \"kubernetes.io/projected/62d536a1-c184-4077-a6f8-4285c3ebe5db-kube-api-access-md2vk\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.428235 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.428593 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.441029 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.452536 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46nmf\" (UniqueName: \"kubernetes.io/projected/40be2c61-bd71-46b6-b837-abf09d8d5aeb-kube-api-access-46nmf\") pod \"ovn-operator-controller-manager-884679f54-vfqg8\" (UID: \"40be2c61-bd71-46b6-b837-abf09d8d5aeb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.452613 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.452657 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hcm\" (UniqueName: \"kubernetes.io/projected/534a3ae8-6587-4e8a-b454-b084edbfeb21-kube-api-access-k6hcm\") pod \"swift-operator-controller-manager-c674c5965-7lk9g\" (UID: \"534a3ae8-6587-4e8a-b454-b084edbfeb21\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.452680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clc27\" (UniqueName: \"kubernetes.io/projected/6cbcdd30-245d-4732-8986-77f861f1f568-kube-api-access-clc27\") pod \"placement-operator-controller-manager-5784578c99-6fjhm\" (UID: 
\"6cbcdd30-245d-4732-8986-77f861f1f568\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.453453 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.453511 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:27.453490817 +0000 UTC m=+909.180881104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.462480 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.463824 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.468678 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.482234 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8wgcm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.494263 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.517554 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.518730 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.523982 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46nmf\" (UniqueName: \"kubernetes.io/projected/40be2c61-bd71-46b6-b837-abf09d8d5aeb-kube-api-access-46nmf\") pod \"ovn-operator-controller-manager-884679f54-vfqg8\" (UID: \"40be2c61-bd71-46b6-b837-abf09d8d5aeb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.523993 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vm8hb" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.554911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjj2\" (UniqueName: \"kubernetes.io/projected/fff6882e-3a77-462f-b12e-25192ea56328-kube-api-access-xjjj2\") pod \"telemetry-operator-controller-manager-d6b694c5-6z9rj\" (UID: \"fff6882e-3a77-462f-b12e-25192ea56328\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.555029 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hcm\" (UniqueName: \"kubernetes.io/projected/534a3ae8-6587-4e8a-b454-b084edbfeb21-kube-api-access-k6hcm\") pod \"swift-operator-controller-manager-c674c5965-7lk9g\" (UID: \"534a3ae8-6587-4e8a-b454-b084edbfeb21\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.555060 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clc27\" (UniqueName: \"kubernetes.io/projected/6cbcdd30-245d-4732-8986-77f861f1f568-kube-api-access-clc27\") pod \"placement-operator-controller-manager-5784578c99-6fjhm\" (UID: \"6cbcdd30-245d-4732-8986-77f861f1f568\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.560281 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.566028 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.575216 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.584213 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.592744 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6mf97" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.599723 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clc27\" (UniqueName: \"kubernetes.io/projected/6cbcdd30-245d-4732-8986-77f861f1f568-kube-api-access-clc27\") pod \"placement-operator-controller-manager-5784578c99-6fjhm\" (UID: \"6cbcdd30-245d-4732-8986-77f861f1f568\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.600550 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6hcm\" (UniqueName: \"kubernetes.io/projected/534a3ae8-6587-4e8a-b454-b084edbfeb21-kube-api-access-k6hcm\") pod \"swift-operator-controller-manager-c674c5965-7lk9g\" (UID: \"534a3ae8-6587-4e8a-b454-b084edbfeb21\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.614357 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.634211 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.663254 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tj5g\" (UniqueName: \"kubernetes.io/projected/93d0e3bc-0e33-4254-b52e-31f28fdff357-kube-api-access-7tj5g\") pod \"test-operator-controller-manager-5c5cb9c4d7-5ngf9\" (UID: \"93d0e3bc-0e33-4254-b52e-31f28fdff357\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.663361 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjj2\" (UniqueName: \"kubernetes.io/projected/fff6882e-3a77-462f-b12e-25192ea56328-kube-api-access-xjjj2\") pod \"telemetry-operator-controller-manager-d6b694c5-6z9rj\" (UID: \"fff6882e-3a77-462f-b12e-25192ea56328\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.726900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjj2\" (UniqueName: \"kubernetes.io/projected/fff6882e-3a77-462f-b12e-25192ea56328-kube-api-access-xjjj2\") pod \"telemetry-operator-controller-manager-d6b694c5-6z9rj\" (UID: \"fff6882e-3a77-462f-b12e-25192ea56328\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.764722 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8sk\" (UniqueName: \"kubernetes.io/projected/bdcce941-5cae-42fe-9dc5-a71e1e55790e-kube-api-access-xz8sk\") pod \"watcher-operator-controller-manager-6c4d75f7f9-pj69z\" (UID: \"bdcce941-5cae-42fe-9dc5-a71e1e55790e\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 
15:28:26.764819 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tj5g\" (UniqueName: \"kubernetes.io/projected/93d0e3bc-0e33-4254-b52e-31f28fdff357-kube-api-access-7tj5g\") pod \"test-operator-controller-manager-5c5cb9c4d7-5ngf9\" (UID: \"93d0e3bc-0e33-4254-b52e-31f28fdff357\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.776594 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.810387 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.855336 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.866778 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.868198 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz8sk\" (UniqueName: \"kubernetes.io/projected/bdcce941-5cae-42fe-9dc5-a71e1e55790e-kube-api-access-xz8sk\") pod \"watcher-operator-controller-manager-6c4d75f7f9-pj69z\" (UID: \"bdcce941-5cae-42fe-9dc5-a71e1e55790e\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.868267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.868421 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: E0316 15:28:26.868485 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:27.868454359 +0000 UTC m=+909.595844646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.884595 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tj5g\" (UniqueName: \"kubernetes.io/projected/93d0e3bc-0e33-4254-b52e-31f28fdff357-kube-api-access-7tj5g\") pod \"test-operator-controller-manager-5c5cb9c4d7-5ngf9\" (UID: \"93d0e3bc-0e33-4254-b52e-31f28fdff357\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.893118 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5467877-vhgh7"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.894029 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.912578 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.912928 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.913662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ln4qm" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.914129 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.929670 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5467877-vhgh7"] Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.969701 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.969823 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.969872 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj6d2\" (UniqueName: \"kubernetes.io/projected/0a9b1e66-192c-4eab-a960-7fbd08759f54-kube-api-access-sj6d2\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:26 crc kubenswrapper[4736]: I0316 15:28:26.972892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz8sk\" (UniqueName: \"kubernetes.io/projected/bdcce941-5cae-42fe-9dc5-a71e1e55790e-kube-api-access-xz8sk\") pod \"watcher-operator-controller-manager-6c4d75f7f9-pj69z\" (UID: \"bdcce941-5cae-42fe-9dc5-a71e1e55790e\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.079418 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.079488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.079532 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj6d2\" (UniqueName: \"kubernetes.io/projected/0a9b1e66-192c-4eab-a960-7fbd08759f54-kube-api-access-sj6d2\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.079981 4736 
secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.080028 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:27.580009514 +0000 UTC m=+909.307399801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.080158 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.080250 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:27.58022391 +0000 UTC m=+909.307614197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.130412 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg"] Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.131750 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.140555 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-n67qk" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.168292 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj6d2\" (UniqueName: \"kubernetes.io/projected/0a9b1e66-192c-4eab-a960-7fbd08759f54-kube-api-access-sj6d2\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.195874 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg"] Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.198703 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7dn\" (UniqueName: \"kubernetes.io/projected/0a609c84-6f6b-48ae-a12b-d604e7b91c36-kube-api-access-ms7dn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dblvg\" (UID: \"0a609c84-6f6b-48ae-a12b-d604e7b91c36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.259759 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.302057 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7dn\" (UniqueName: \"kubernetes.io/projected/0a609c84-6f6b-48ae-a12b-d604e7b91c36-kube-api-access-ms7dn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dblvg\" (UID: \"0a609c84-6f6b-48ae-a12b-d604e7b91c36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.311432 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj"] Mar 16 15:28:27 crc kubenswrapper[4736]: W0316 15:28:27.427197 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaac26090_af84_496a_afdf_efdb24694811.slice/crio-48f4745a3d0df23c421103bba6630e1bb9358f2f76619817f95920aa871b4470 WatchSource:0}: Error finding container 48f4745a3d0df23c421103bba6630e1bb9358f2f76619817f95920aa871b4470: Status 404 returned error can't find the container with id 48f4745a3d0df23c421103bba6630e1bb9358f2f76619817f95920aa871b4470 Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.428805 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7dn\" (UniqueName: \"kubernetes.io/projected/0a609c84-6f6b-48ae-a12b-d604e7b91c36-kube-api-access-ms7dn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dblvg\" (UID: \"0a609c84-6f6b-48ae-a12b-d604e7b91c36\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.531208 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.531392 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.531449 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:29.531429175 +0000 UTC m=+911.258819462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.536698 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.632803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.632897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.633151 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.633213 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:28.633193931 +0000 UTC m=+910.360584218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.633245 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.633348 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:28.633323164 +0000 UTC m=+910.360713451 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.784300 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" event={"ID":"aac26090-af84-496a-afdf-efdb24694811","Type":"ContainerStarted","Data":"48f4745a3d0df23c421103bba6630e1bb9358f2f76619817f95920aa871b4470"} Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.800943 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" event={"ID":"8163ef92-862a-4de1-a443-8ac84a5ba0c9","Type":"ContainerStarted","Data":"39aaba90f14ef640ccde077ac707873af2bccab6a332d18b5ab99fe5b7ea2e5f"} Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.876945 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n"] Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.899638 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt"] Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.937030 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2"] Mar 16 15:28:27 crc kubenswrapper[4736]: I0316 15:28:27.939119 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.939295 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:27 crc kubenswrapper[4736]: E0316 15:28:27.939363 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:29.93934306 +0000 UTC m=+911.666733347 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.406168 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.412145 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.423267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.428551 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-bqghw"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.434723 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2"] Mar 16 15:28:28 crc kubenswrapper[4736]: W0316 15:28:28.474489 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99d86cbe_cf17_42a7_bc5b_d692609fff64.slice/crio-3356c488eb89c38d466d7814d25e43531c9eba4f60494f19c039393577f7aa18 WatchSource:0}: Error finding container 3356c488eb89c38d466d7814d25e43531c9eba4f60494f19c039393577f7aa18: Status 404 returned error can't find the container with id 3356c488eb89c38d466d7814d25e43531c9eba4f60494f19c039393577f7aa18 Mar 16 15:28:28 crc kubenswrapper[4736]: W0316 15:28:28.482917 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1ae843c_f1b5_4ee2_8300_55f93941ba2b.slice/crio-5ad682ec6d270e5cb3c29b62a791b90592a140399b9bf56b52b412a69c1139f3 WatchSource:0}: Error finding container 5ad682ec6d270e5cb3c29b62a791b90592a140399b9bf56b52b412a69c1139f3: Status 404 returned error can't find the container with id 5ad682ec6d270e5cb3c29b62a791b90592a140399b9bf56b52b412a69c1139f3 Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.488395 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.508254 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.644308 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.653759 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.660523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " 
pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.660595 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.660733 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.660785 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:30.6607689 +0000 UTC m=+912.388159177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.663025 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.663086 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:30.663073802 +0000 UTC m=+912.390464089 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.693034 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.713519 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.729132 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.737786 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z"] Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.745964 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg"] Mar 16 15:28:28 crc kubenswrapper[4736]: W0316 15:28:28.755972 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40be2c61_bd71_46b6_b837_abf09d8d5aeb.slice/crio-ea217a4319c75f2fb1974b4a8fde08c607da7cbbde25f877334bce4b5470f062 WatchSource:0}: Error finding container ea217a4319c75f2fb1974b4a8fde08c607da7cbbde25f877334bce4b5470f062: Status 404 returned error can't find the container with id ea217a4319c75f2fb1974b4a8fde08c607da7cbbde25f877334bce4b5470f062 Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.759903 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-46nmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-884679f54-vfqg8_openstack-operators(40be2c61-bd71-46b6-b837-abf09d8d5aeb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.760172 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6hcm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-c674c5965-7lk9g_openstack-operators(534a3ae8-6587-4e8a-b454-b084edbfeb21): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 16 15:28:28 crc 
kubenswrapper[4736]: E0316 15:28:28.763867 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" podUID="40be2c61-bd71-46b6-b837-abf09d8d5aeb" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.764884 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podUID="534a3ae8-6587-4e8a-b454-b084edbfeb21" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.767870 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n"] Mar 16 15:28:28 crc kubenswrapper[4736]: W0316 15:28:28.781913 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdcce941_5cae_42fe_9dc5_a71e1e55790e.slice/crio-cc67f2b13418aa98947457711252f23815cce3bdc2c7be9d8f9058aa73fffa39 WatchSource:0}: Error finding container cc67f2b13418aa98947457711252f23815cce3bdc2c7be9d8f9058aa73fffa39: Status 404 returned error can't find the container with id cc67f2b13418aa98947457711252f23815cce3bdc2c7be9d8f9058aa73fffa39 Mar 16 15:28:28 crc kubenswrapper[4736]: W0316 15:28:28.790940 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a609c84_6f6b_48ae_a12b_d604e7b91c36.slice/crio-f0e5c946f9e2bbc027c80a6883aefb635b74f9e05fdca2c230ed9065ae83e540 WatchSource:0}: Error finding container f0e5c946f9e2bbc027c80a6883aefb635b74f9e05fdca2c230ed9065ae83e540: Status 404 returned error can't find the container with id f0e5c946f9e2bbc027c80a6883aefb635b74f9e05fdca2c230ed9065ae83e540 Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.801535 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hr5f8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-768b96df4c-7gd6n_openstack-operators(d77bc7ac-fb08-4603-8453-677c6be6916d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.802948 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.809303 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ms7dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dblvg_openstack-operators(0a609c84-6f6b-48ae-a12b-d604e7b91c36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.810388 
4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" podUID="0a609c84-6f6b-48ae-a12b-d604e7b91c36" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.810718 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" event={"ID":"99d86cbe-cf17-42a7-bc5b-d692609fff64","Type":"ContainerStarted","Data":"3356c488eb89c38d466d7814d25e43531c9eba4f60494f19c039393577f7aa18"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.811515 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" event={"ID":"d77bc7ac-fb08-4603-8453-677c6be6916d","Type":"ContainerStarted","Data":"67beae1ad4bb4cd2a4ff55b27284b42d84cc4a977a4ac43508c4a65d71b19a90"} Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.847394 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.852584 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" event={"ID":"0a609c84-6f6b-48ae-a12b-d604e7b91c36","Type":"ContainerStarted","Data":"f0e5c946f9e2bbc027c80a6883aefb635b74f9e05fdca2c230ed9065ae83e540"} Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.856240 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" podUID="0a609c84-6f6b-48ae-a12b-d604e7b91c36" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.870633 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" event={"ID":"b1ae843c-f1b5-4ee2-8300-55f93941ba2b","Type":"ContainerStarted","Data":"5ad682ec6d270e5cb3c29b62a791b90592a140399b9bf56b52b412a69c1139f3"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.881505 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" event={"ID":"fff6882e-3a77-462f-b12e-25192ea56328","Type":"ContainerStarted","Data":"d35906fb5ae118d8b5db9db3e8a8dfaeeaf028141df9c1e108d2c72de07b8aa1"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.890567 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" event={"ID":"f8308a1a-301e-40b9-8a0e-b7e267e74a10","Type":"ContainerStarted","Data":"a60d24db897fbe5a7597a5b7a31cadd9d932bb1aa6933c0b39613ba2e1ff6923"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.895794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" 
event={"ID":"1ae22b3c-97a5-4592-b263-557131818155","Type":"ContainerStarted","Data":"8094d350eee8afdf4a468cede466da47d967873cbed0ad5c1f6ae94ed1e1ddd7"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.898253 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" event={"ID":"2d48b057-960e-445a-bc66-b6d3dbfb56f9","Type":"ContainerStarted","Data":"fe83fb67f3ee3ff4ebfcadaa663055ecd3633877ef607f7fa9200165c4997c77"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.902598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" event={"ID":"40be2c61-bd71-46b6-b837-abf09d8d5aeb","Type":"ContainerStarted","Data":"ea217a4319c75f2fb1974b4a8fde08c607da7cbbde25f877334bce4b5470f062"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.914095 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" event={"ID":"6cbcdd30-245d-4732-8986-77f861f1f568","Type":"ContainerStarted","Data":"d60513896b153a5f5ac90c5a7400e08da9a526df89e29c8a73b1e0bfdf919617"} Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.914168 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" podUID="40be2c61-bd71-46b6-b837-abf09d8d5aeb" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.928852 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" event={"ID":"569449b8-1135-4dd6-b6fe-ad66844b413e","Type":"ContainerStarted","Data":"b761836f8ccd7dbdd6bd7c5bb1cd486973a412bf2972c6eb78624862deb0368a"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.932849 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" event={"ID":"93d0e3bc-0e33-4254-b52e-31f28fdff357","Type":"ContainerStarted","Data":"7f034b45a603cc4d07580d666df39a5ce4055666c06adb2a7f99c04da65a93d3"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.945726 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" event={"ID":"9d7909e1-3088-4a9e-b2ac-286927abd741","Type":"ContainerStarted","Data":"1086b6175a4ac99281a5af30e5939dee34433cf0c93217e61ac06f304649a01c"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.947610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" event={"ID":"99a35a5a-103f-4e00-9b39-d4f86531f5f7","Type":"ContainerStarted","Data":"6d0618ab8bee35846a8b6acc23c5ae121c2af06d986912ea8f44f0898f946970"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.951614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" event={"ID":"e7971b38-1b13-4984-a055-2cc52b34bf6b","Type":"ContainerStarted","Data":"ff6d094495b82efb74b66af90ea1190522cd50872f567b9cd6fa957d52d9443b"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.955027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" event={"ID":"534a3ae8-6587-4e8a-b454-b084edbfeb21","Type":"ContainerStarted","Data":"b474fa3619487bc8b74a1b423f8356cf6ddbee9490d33791d89f19c0f9a69ccb"} Mar 16 15:28:28 crc kubenswrapper[4736]: E0316 15:28:28.956764 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podUID="534a3ae8-6587-4e8a-b454-b084edbfeb21" Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.959683 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" event={"ID":"285f243f-b886-440f-8a92-b1ddf60bf6e6","Type":"ContainerStarted","Data":"986bb0049d9693e48adb5cb6966fef713ca6647cbb8fe68081581aa954d7c9aa"} Mar 16 15:28:28 crc kubenswrapper[4736]: I0316 15:28:28.975719 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" event={"ID":"bdcce941-5cae-42fe-9dc5-a71e1e55790e","Type":"ContainerStarted","Data":"cc67f2b13418aa98947457711252f23815cce3bdc2c7be9d8f9058aa73fffa39"} Mar 16 15:28:29 crc kubenswrapper[4736]: I0316 15:28:29.596871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:29 crc kubenswrapper[4736]: E0316 15:28:29.597299 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:29 crc kubenswrapper[4736]: E0316 15:28:29.597369 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:33.59735113 +0000 UTC m=+915.324741417 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: I0316 15:28:30.005898 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.006089 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.006161 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:34.006139308 +0000 UTC m=+915.733529595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.018725 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:866844c5b88e1e0518ceb7490cac9d093da3fb8b2f27ba7bd9bd89f946b9ee6e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podUID="534a3ae8-6587-4e8a-b454-b084edbfeb21" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.025073 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:ec36a9083657587022f8471c9d5a71b87a7895398496e7fc546c73aa1eae4b56\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.025368 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" podUID="40be2c61-bd71-46b6-b837-abf09d8d5aeb" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.025384 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" podUID="0a609c84-6f6b-48ae-a12b-d604e7b91c36" Mar 16 
15:28:30 crc kubenswrapper[4736]: I0316 15:28:30.722871 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.723579 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.724046 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:34.724012244 +0000 UTC m=+916.451402531 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: I0316 15:28:30.724097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.725386 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:30 crc kubenswrapper[4736]: E0316 15:28:30.725426 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:34.725414541 +0000 UTC m=+916.452804828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:33 crc kubenswrapper[4736]: I0316 15:28:33.609943 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:33 crc kubenswrapper[4736]: E0316 15:28:33.610240 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:33 crc kubenswrapper[4736]: E0316 15:28:33.610702 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. 
No retries permitted until 2026-03-16 15:28:41.6106819 +0000 UTC m=+923.338072187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: I0316 15:28:34.021897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.022052 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.022139 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:42.022117338 +0000 UTC m=+923.749507625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: I0316 15:28:34.733576 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:34 crc kubenswrapper[4736]: I0316 15:28:34.733720 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.733868 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.733927 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:42.733909592 +0000 UTC m=+924.461299879 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.734678 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:34 crc kubenswrapper[4736]: E0316 15:28:34.734789 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:42.734779485 +0000 UTC m=+924.462169772 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:41 crc kubenswrapper[4736]: I0316 15:28:41.673097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:41 crc kubenswrapper[4736]: E0316 15:28:41.673464 4736 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:41 crc kubenswrapper[4736]: E0316 15:28:41.674677 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert podName:634ac783-1fe6-4191-b432-f22ad5d84357 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:57.67463264 +0000 UTC m=+939.402022947 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert") pod "infra-operator-controller-manager-7b9c774f96-9b78c" (UID: "634ac783-1fe6-4191-b432-f22ad5d84357") : secret "infra-operator-webhook-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: I0316 15:28:42.082059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.082415 4736 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.082555 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert podName:62d536a1-c184-4077-a6f8-4285c3ebe5db nodeName:}" failed. No retries permitted until 2026-03-16 15:28:58.082520114 +0000 UTC m=+939.809910491 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" (UID: "62d536a1-c184-4077-a6f8-4285c3ebe5db") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.116815 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113" Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.117059 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4zkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-8464cc45fb-fd7xj_openstack-operators(aac26090-af84-496a-afdf-efdb24694811): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.118308 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" podUID="aac26090-af84-496a-afdf-efdb24694811" Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.143619 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:703ad3a2b749bce100f1e2a445312b65dc3b8b45e8c8ba59f311d3f8f3368113\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" podUID="aac26090-af84-496a-afdf-efdb24694811" Mar 16 15:28:42 crc kubenswrapper[4736]: I0316 15:28:42.809000 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:42 crc kubenswrapper[4736]: I0316 15:28:42.809476 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.809287 4736 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.810032 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:58.810008466 +0000 UTC m=+940.537398753 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "webhook-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.809942 4736 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 16 15:28:42 crc kubenswrapper[4736]: E0316 15:28:42.810086 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs podName:0a9b1e66-192c-4eab-a960-7fbd08759f54 nodeName:}" failed. No retries permitted until 2026-03-16 15:28:58.810079018 +0000 UTC m=+940.537469305 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs") pod "openstack-operator-controller-manager-5467877-vhgh7" (UID: "0a9b1e66-192c-4eab-a960-7fbd08759f54") : secret "metrics-server-cert" not found Mar 16 15:28:43 crc kubenswrapper[4736]: E0316 15:28:43.726361 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444" Mar 16 15:28:43 crc kubenswrapper[4736]: E0316 15:28:43.726716 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xjjj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d6b694c5-6z9rj_openstack-operators(fff6882e-3a77-462f-b12e-25192ea56328): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:43 crc kubenswrapper[4736]: E0316 15:28:43.727939 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" 
podUID="fff6882e-3a77-462f-b12e-25192ea56328" Mar 16 15:28:44 crc kubenswrapper[4736]: E0316 15:28:44.178249 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:c500fa7080b94105e85eeced772d8872e4168904e74ba02116e15ab66f522444\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" podUID="fff6882e-3a77-462f-b12e-25192ea56328" Mar 16 15:28:46 crc kubenswrapper[4736]: E0316 15:28:46.518348 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a" Mar 16 15:28:46 crc kubenswrapper[4736]: E0316 15:28:46.521906 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wsr2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5b9f45d989-47kkg_openstack-operators(e7971b38-1b13-4984-a055-2cc52b34bf6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:46 crc kubenswrapper[4736]: E0316 15:28:46.523179 4736 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" podUID="e7971b38-1b13-4984-a055-2cc52b34bf6b" Mar 16 15:28:47 crc kubenswrapper[4736]: E0316 15:28:47.200228 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:425fd66675becbe0ca2b2fe1a5a6694ac6e0b1cdce9a77a7a37f99785eadc74a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" podUID="e7971b38-1b13-4984-a055-2cc52b34bf6b" Mar 16 15:28:47 crc kubenswrapper[4736]: E0316 15:28:47.621785 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" Mar 16 15:28:47 crc kubenswrapper[4736]: E0316 15:28:47.622446 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7tj5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-5ngf9_openstack-operators(93d0e3bc-0e33-4254-b52e-31f28fdff357): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:47 crc kubenswrapper[4736]: E0316 15:28:47.623700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" podUID="93d0e3bc-0e33-4254-b52e-31f28fdff357" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.206318 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" podUID="93d0e3bc-0e33-4254-b52e-31f28fdff357" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.326481 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.326952 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxq46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67ccfc9778-bgvjq_openstack-operators(285f243f-b886-440f-8a92-b1ddf60bf6e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.328366 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" podUID="285f243f-b886-440f-8a92-b1ddf60bf6e6" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.905809 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.906221 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q8p94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-bqghw_openstack-operators(99d86cbe-cf17-42a7-bc5b-d692609fff64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:48 crc kubenswrapper[4736]: E0316 15:28:48.910852 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podUID="99d86cbe-cf17-42a7-bc5b-d692609fff64" Mar 16 15:28:49 crc kubenswrapper[4736]: E0316 15:28:49.220943 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podUID="99d86cbe-cf17-42a7-bc5b-d692609fff64" Mar 16 15:28:49 crc kubenswrapper[4736]: E0316 15:28:49.221365 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:6e7552996253fc66667eaa3eb0e11b4e97145efa2ae577155ceabf8e9913ddc1\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" podUID="285f243f-b886-440f-8a92-b1ddf60bf6e6" Mar 16 15:28:51 crc kubenswrapper[4736]: E0316 15:28:51.703015 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622" Mar 16 15:28:51 crc kubenswrapper[4736]: E0316 15:28:51.703924 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clc27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-6fjhm_openstack-operators(6cbcdd30-245d-4732-8986-77f861f1f568): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:51 crc kubenswrapper[4736]: E0316 15:28:51.705162 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" podUID="6cbcdd30-245d-4732-8986-77f861f1f568" Mar 16 15:28:52 crc kubenswrapper[4736]: E0316 15:28:52.235471 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a" Mar 16 15:28:52 crc kubenswrapper[4736]: E0316 15:28:52.235765 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvf8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5d488d59fb-7hrfc_openstack-operators(b1ae843c-f1b5-4ee2-8300-55f93941ba2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:28:52 crc kubenswrapper[4736]: E0316 15:28:52.237101 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podUID="b1ae843c-f1b5-4ee2-8300-55f93941ba2b" Mar 16 15:28:52 crc kubenswrapper[4736]: E0316 15:28:52.242037 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" podUID="6cbcdd30-245d-4732-8986-77f861f1f568" Mar 16 15:28:53 crc kubenswrapper[4736]: E0316 15:28:53.250024 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:7398eb8fa5a4844d3326a5dff759d17199870c389b3ce3011a038b27bf95512a\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podUID="b1ae843c-f1b5-4ee2-8300-55f93941ba2b" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.270210 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" event={"ID":"534a3ae8-6587-4e8a-b454-b084edbfeb21","Type":"ContainerStarted","Data":"16bcd52a20eda248f8aed0a39717fc011f15cef75cf337d3dc5bce6bf76c57c9"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.271625 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.273169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" event={"ID":"8163ef92-862a-4de1-a443-8ac84a5ba0c9","Type":"ContainerStarted","Data":"7bc53d7bd531a627c3ba278d0fb61518da9ac68a272a30d32e320b8088979079"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.273547 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.274847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" event={"ID":"1ae22b3c-97a5-4592-b263-557131818155","Type":"ContainerStarted","Data":"11915e0b8d8f2408973b6b1b95cfbf9e35234b54444cce673aa84b14ea1bbfca"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.275214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.276362 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" event={"ID":"99a35a5a-103f-4e00-9b39-d4f86531f5f7","Type":"ContainerStarted","Data":"1a8eecb3e11750fc2a736a2f470c7a34028f04bd04f7586a3f74eb2e2522af7e"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.276744 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.278529 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" event={"ID":"bdcce941-5cae-42fe-9dc5-a71e1e55790e","Type":"ContainerStarted","Data":"cab560ebd9bde5b645c1ba405e24a0ed8578c5625aea383e5f1b507e659184fe"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.279059 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.280858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" event={"ID":"9d7909e1-3088-4a9e-b2ac-286927abd741","Type":"ContainerStarted","Data":"c117d835bcf22de2fa7cbd46d04b0d6905a237e63b9e90bf6439ccd4268ea05d"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.281298 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.282896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" event={"ID":"2d48b057-960e-445a-bc66-b6d3dbfb56f9","Type":"ContainerStarted","Data":"2549e78fb139c12379fe6da69b5a9c4067758feb5ea485341870931ab8900d33"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.283167 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.284348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" event={"ID":"aac26090-af84-496a-afdf-efdb24694811","Type":"ContainerStarted","Data":"7840f3bb8d9b3253d88ad38efe7e7569da52c2ab991bbd980d078fe03768f7ee"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.284734 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.287172 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" event={"ID":"fff6882e-3a77-462f-b12e-25192ea56328","Type":"ContainerStarted","Data":"b93fe639efb4224114e76d2ae719045046a8340edcac725f094a939049b897ac"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.287413 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.288999 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" event={"ID":"f8308a1a-301e-40b9-8a0e-b7e267e74a10","Type":"ContainerStarted","Data":"12adced333b6fed21bad9e4b619aedb6514d5852188cde3c5482c4dadd2722e0"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.289715 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.291002 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" event={"ID":"d77bc7ac-fb08-4603-8453-677c6be6916d","Type":"ContainerStarted","Data":"7cf6fe6a9ea131331b74232f56b9c6a78dede41823e8ce9302ed04fccbb9219e"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.291410 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.292649 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" event={"ID":"0a609c84-6f6b-48ae-a12b-d604e7b91c36","Type":"ContainerStarted","Data":"c117e451f82410bc44ce964ee4408b0aad49884b9380e6412aef58a555696e4e"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.295187 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" event={"ID":"40be2c61-bd71-46b6-b837-abf09d8d5aeb","Type":"ContainerStarted","Data":"d8c9db7501138355338f606ef99774eb48428311a621a6050c44e430b9663edb"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.295501 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.296780 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" event={"ID":"569449b8-1135-4dd6-b6fe-ad66844b413e","Type":"ContainerStarted","Data":"0d0f926a2e0ed9f999ae0871d2dbe6d99185e8b34dd21be7d9bcf924bd37c64d"} Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.297260 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.316063 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podStartSLOduration=3.399489239 podStartE2EDuration="30.316039201s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.76008201 +0000 UTC m=+910.487472297" lastFinishedPulling="2026-03-16 15:28:55.676631982 +0000 UTC m=+937.404022259" observedRunningTime="2026-03-16 15:28:56.311089109 +0000 UTC m=+938.038479396" watchObservedRunningTime="2026-03-16 15:28:56.316039201 +0000 UTC m=+938.043429508" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.353460 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" podStartSLOduration=3.091506914 podStartE2EDuration="31.353441035s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:27.437867757 +0000 UTC m=+909.165258044" lastFinishedPulling="2026-03-16 15:28:55.699801878 +0000 UTC m=+937.427192165" observedRunningTime="2026-03-16 15:28:56.347893347 +0000 UTC m=+938.075283634" watchObservedRunningTime="2026-03-16 15:28:56.353441035 +0000 UTC m=+938.080831322" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.371682 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" podStartSLOduration=4.521232886 podStartE2EDuration="31.37165871s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.759751391 +0000 UTC m=+910.487141678" lastFinishedPulling="2026-03-16 15:28:55.610177205 +0000 UTC m=+937.337567502" observedRunningTime="2026-03-16 15:28:56.3660215 +0000 UTC m=+938.093411787" watchObservedRunningTime="2026-03-16 15:28:56.37165871 +0000 UTC m=+938.099048997" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.478469 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" podStartSLOduration=4.912428798 podStartE2EDuration="31.478445939s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.437254528 +0000 UTC m=+910.164644815" lastFinishedPulling="2026-03-16 15:28:55.003271669 +0000 UTC m=+936.730661956" observedRunningTime="2026-03-16 15:28:56.43489158 +0000 UTC m=+938.162281867" watchObservedRunningTime="2026-03-16 15:28:56.478445939 +0000 UTC m=+938.205836226" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.704481 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" podStartSLOduration=6.513185557 podStartE2EDuration="31.704454798s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:27.024261032 +0000 UTC m=+908.751651309" lastFinishedPulling="2026-03-16 15:28:52.215530273 +0000 UTC m=+933.942920550" observedRunningTime="2026-03-16 15:28:56.703666756 +0000 UTC m=+938.431057053" watchObservedRunningTime="2026-03-16 15:28:56.704454798 +0000 UTC m=+938.431845075" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.704643 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" podStartSLOduration=5.770314334 podStartE2EDuration="31.704628362s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.467524202 +0000 UTC m=+910.194914489" lastFinishedPulling="2026-03-16 15:28:54.40183823 +0000 UTC m=+936.129228517" observedRunningTime="2026-03-16 15:28:56.497570497 +0000 UTC m=+938.224960784" watchObservedRunningTime="2026-03-16 15:28:56.704628362 +0000 UTC m=+938.432018649" Mar 16 15:28:56 crc kubenswrapper[4736]: I0316 15:28:56.889693 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" podStartSLOduration=5.272226178 podStartE2EDuration="30.889664531s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.785356712 +0000 UTC m=+910.512746999" lastFinishedPulling="2026-03-16 15:28:54.402795065 +0000 UTC m=+936.130185352" observedRunningTime="2026-03-16 15:28:56.87157326 +0000 UTC m=+938.598963547" watchObservedRunningTime="2026-03-16 15:28:56.889664531 +0000 UTC m=+938.617054818" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.033887 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" podStartSLOduration=5.596355439 podStartE2EDuration="32.033849314s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:27.964282173 +0000 UTC m=+909.691672460" lastFinishedPulling="2026-03-16 15:28:54.401776048 +0000 UTC m=+936.129166335" observedRunningTime="2026-03-16 15:28:57.026601392 +0000 UTC m=+938.753991679" watchObservedRunningTime="2026-03-16 15:28:57.033849314 +0000 UTC m=+938.761239601" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.465440 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dblvg" podStartSLOduration=3.588031029 podStartE2EDuration="30.465417218s" podCreationTimestamp="2026-03-16 15:28:27 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.809207647 +0000 UTC m=+910.536597934" lastFinishedPulling="2026-03-16 15:28:55.686593796 +0000 UTC m=+937.413984123" observedRunningTime="2026-03-16 15:28:57.365260065 +0000 UTC m=+939.092650352" watchObservedRunningTime="2026-03-16 15:28:57.465417218 +0000 UTC m=+939.192807505" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.470904 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" podStartSLOduration=6.0473168 podStartE2EDuration="32.470889614s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:27.97918046 +0000 UTC m=+909.706570747" lastFinishedPulling="2026-03-16 15:28:54.402753274 +0000 UTC m=+936.130143561" observedRunningTime="2026-03-16 15:28:57.453527922 +0000 UTC m=+939.180918209" watchObservedRunningTime="2026-03-16 15:28:57.470889614 +0000 UTC m=+939.198279901" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.524577 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" podStartSLOduration=5.958987361 podStartE2EDuration="32.52456098s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.437300909 +0000 UTC m=+910.164691196" 
lastFinishedPulling="2026-03-16 15:28:55.002874528 +0000 UTC m=+936.730264815" observedRunningTime="2026-03-16 15:28:57.516727452 +0000 UTC m=+939.244117739" watchObservedRunningTime="2026-03-16 15:28:57.52456098 +0000 UTC m=+939.251951267" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.619569 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podStartSLOduration=5.743894042 podStartE2EDuration="32.619546006s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.801404649 +0000 UTC m=+910.528794936" lastFinishedPulling="2026-03-16 15:28:55.677056593 +0000 UTC m=+937.404446900" observedRunningTime="2026-03-16 15:28:57.615271743 +0000 UTC m=+939.342662030" watchObservedRunningTime="2026-03-16 15:28:57.619546006 +0000 UTC m=+939.346936283" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.687444 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" podStartSLOduration=4.506658134 podStartE2EDuration="31.68742518s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.504510496 +0000 UTC m=+910.231900783" lastFinishedPulling="2026-03-16 15:28:55.685277542 +0000 UTC m=+937.412667829" observedRunningTime="2026-03-16 15:28:57.680992139 +0000 UTC m=+939.408382426" watchObservedRunningTime="2026-03-16 15:28:57.68742518 +0000 UTC m=+939.414815467" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.709342 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" podStartSLOduration=6.276153773 podStartE2EDuration="32.709320592s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:27.968620979 +0000 UTC m=+909.696011266" lastFinishedPulling="2026-03-16 15:28:54.401787798 +0000 UTC m=+936.129178085" observedRunningTime="2026-03-16 15:28:57.705987024 +0000 UTC m=+939.433377301" watchObservedRunningTime="2026-03-16 15:28:57.709320592 +0000 UTC m=+939.436710879" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.722740 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.743376 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634ac783-1fe6-4191-b432-f22ad5d84357-cert\") pod \"infra-operator-controller-manager-7b9c774f96-9b78c\" (UID: \"634ac783-1fe6-4191-b432-f22ad5d84357\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:57 crc kubenswrapper[4736]: I0316 15:28:57.779901 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.134033 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.160398 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62d536a1-c184-4077-a6f8-4285c3ebe5db-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-pc5vv\" (UID: \"62d536a1-c184-4077-a6f8-4285c3ebe5db\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.365610 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.496777 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c"] Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.846039 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.846178 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.850892 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-webhook-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:58 crc kubenswrapper[4736]: I0316 15:28:58.855902 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a9b1e66-192c-4eab-a960-7fbd08759f54-metrics-certs\") pod \"openstack-operator-controller-manager-5467877-vhgh7\" (UID: \"0a9b1e66-192c-4eab-a960-7fbd08759f54\") " pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:59 crc kubenswrapper[4736]: I0316 15:28:59.096617 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:28:59 crc kubenswrapper[4736]: I0316 15:28:59.331243 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" event={"ID":"634ac783-1fe6-4191-b432-f22ad5d84357","Type":"ContainerStarted","Data":"2d5b48d5b5608730e61989dd155d698ef564db9df2819a7b5bf6cbffa33314b5"} Mar 16 15:28:59 crc kubenswrapper[4736]: I0316 15:28:59.419578 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv"] Mar 16 15:28:59 crc kubenswrapper[4736]: I0316 15:28:59.674293 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5467877-vhgh7"] Mar 16 15:29:00 crc kubenswrapper[4736]: I0316 15:29:00.342687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" event={"ID":"0a9b1e66-192c-4eab-a960-7fbd08759f54","Type":"ContainerStarted","Data":"1d644cb37032c38861a93e3c21c712f838fab2659fdc1ede6a660774ecae352e"} Mar 16 15:29:00 crc kubenswrapper[4736]: I0316 15:29:00.343313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" event={"ID":"0a9b1e66-192c-4eab-a960-7fbd08759f54","Type":"ContainerStarted","Data":"8384e06980c5b71e06b6cb16798ea6275c0506a2aaaa54ddf0086e8730e70a7b"} Mar 16 15:29:00 crc kubenswrapper[4736]: I0316 15:29:00.343381 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:29:00 crc kubenswrapper[4736]: I0316 15:29:00.345604 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" event={"ID":"62d536a1-c184-4077-a6f8-4285c3ebe5db","Type":"ContainerStarted","Data":"6919f80304590b0fa21e24446efb79e96a8e3c7543a0fc548adb7c0e5c37bedd"} Mar 16 15:29:00 crc kubenswrapper[4736]: I0316 15:29:00.402571 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" podStartSLOduration=34.402541256 podStartE2EDuration="34.402541256s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:29:00.383837998 +0000 UTC m=+942.111228305" watchObservedRunningTime="2026-03-16 15:29:00.402541256 +0000 UTC m=+942.129931543" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.364728 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" event={"ID":"93d0e3bc-0e33-4254-b52e-31f28fdff357","Type":"ContainerStarted","Data":"887b25e6e1369b4d94120c18e97fa3fa50c3fce0006cad7dcf80904ce4e4c29b"} Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.366840 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.374578 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" 
event={"ID":"99d86cbe-cf17-42a7-bc5b-d692609fff64","Type":"ContainerStarted","Data":"07113e7d1d1e77f843eea2dc984efb84cc1cc0248c7f757c1dbdce4a3acdc1d4"} Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.374974 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.378145 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" event={"ID":"285f243f-b886-440f-8a92-b1ddf60bf6e6","Type":"ContainerStarted","Data":"3c081131ba2323f6e15b17db0200f16a9be5ee89438d39647a9be94f8cd9174c"} Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.378456 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.388698 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" podStartSLOduration=3.412726862 podStartE2EDuration="35.388678244s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.513963647 +0000 UTC m=+910.241353944" lastFinishedPulling="2026-03-16 15:29:00.489915039 +0000 UTC m=+942.217305326" observedRunningTime="2026-03-16 15:29:01.385842558 +0000 UTC m=+943.113232845" watchObservedRunningTime="2026-03-16 15:29:01.388678244 +0000 UTC m=+943.116068531" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.416419 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podStartSLOduration=4.443467789 podStartE2EDuration="36.41639329s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.483864857 +0000 UTC m=+910.211255144" lastFinishedPulling="2026-03-16 15:29:00.456790348 +0000 UTC m=+942.184180645" observedRunningTime="2026-03-16 15:29:01.411181712 +0000 UTC m=+943.138571999" watchObservedRunningTime="2026-03-16 15:29:01.41639329 +0000 UTC m=+943.143783577" Mar 16 15:29:01 crc kubenswrapper[4736]: I0316 15:29:01.437376 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" podStartSLOduration=4.710386096 podStartE2EDuration="36.437328717s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.728326006 +0000 UTC m=+910.455716293" lastFinishedPulling="2026-03-16 15:29:00.455268627 +0000 UTC m=+942.182658914" observedRunningTime="2026-03-16 15:29:01.435991181 +0000 UTC m=+943.163381488" watchObservedRunningTime="2026-03-16 15:29:01.437328717 +0000 UTC m=+943.164719004" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.409351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" event={"ID":"e7971b38-1b13-4984-a055-2cc52b34bf6b","Type":"ContainerStarted","Data":"7edabfddc0b03c00a52eff13fae3f273c30888b3ac50739fa8bf6c202c603965"} Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.410218 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.412087 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" event={"ID":"634ac783-1fe6-4191-b432-f22ad5d84357","Type":"ContainerStarted","Data":"3b80f587a5d6754a7eefee2cd3151bbf7e674bedf56f4d1a5e0af5fe941321be"} Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.412206 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.413806 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" event={"ID":"62d536a1-c184-4077-a6f8-4285c3ebe5db","Type":"ContainerStarted","Data":"72e616de1acefd4fce8a25ab59d60844569fb5b744f07b13b49ddfc6d2ec452a"} Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.413961 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.435807 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" podStartSLOduration=4.821123429 podStartE2EDuration="39.435780484s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.728683315 +0000 UTC m=+910.456073602" lastFinishedPulling="2026-03-16 15:29:03.34334037 +0000 UTC m=+945.070730657" observedRunningTime="2026-03-16 15:29:04.432707872 +0000 UTC m=+946.160098169" watchObservedRunningTime="2026-03-16 15:29:04.435780484 +0000 UTC m=+946.163170771" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.473165 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" podStartSLOduration=35.530289263 podStartE2EDuration="39.473135348s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:59.416222703 +0000 UTC m=+941.143612990" lastFinishedPulling="2026-03-16 15:29:03.359068778 +0000 UTC m=+945.086459075" observedRunningTime="2026-03-16 15:29:04.462226137 +0000 UTC m=+946.189616424" watchObservedRunningTime="2026-03-16 15:29:04.473135348 +0000 UTC m=+946.200525635" Mar 16 15:29:04 crc kubenswrapper[4736]: I0316 15:29:04.490227 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" podStartSLOduration=35.220592148 podStartE2EDuration="39.490204361s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:59.071662963 +0000 UTC m=+940.799053250" lastFinishedPulling="2026-03-16 15:29:03.341275176 +0000 UTC m=+945.068665463" observedRunningTime="2026-03-16 15:29:04.489568865 +0000 UTC m=+946.216959162" watchObservedRunningTime="2026-03-16 15:29:04.490204361 +0000 UTC m=+946.217594648" Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.422704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" event={"ID":"b1ae843c-f1b5-4ee2-8300-55f93941ba2b","Type":"ContainerStarted","Data":"186be315bf123f471408c0690602259505410ca5f7023b330ea03fe148cdc34f"} Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.423598 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.443928 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podStartSLOduration=4.558063056 podStartE2EDuration="40.443916057s" podCreationTimestamp="2026-03-16 15:28:25 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.504037813 +0000 UTC m=+910.231428100" lastFinishedPulling="2026-03-16 15:29:04.389890814 +0000 UTC m=+946.117281101" observedRunningTime="2026-03-16 15:29:05.441880603 +0000 UTC m=+947.169270900" watchObservedRunningTime="2026-03-16 15:29:05.443916057 +0000 UTC m=+947.171306344" Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.812495 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.831504 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" Mar 16 15:29:05 crc kubenswrapper[4736]: I0316 15:29:05.946763 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.034680 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.168940 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.188600 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.255966 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.346560 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.347600 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.444634 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.570264 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.618559 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.781012 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" Mar 16 15:29:06 crc 
kubenswrapper[4736]: I0316 15:29:06.881703 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" Mar 16 15:29:06 crc kubenswrapper[4736]: I0316 15:29:06.919841 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" Mar 16 15:29:07 crc kubenswrapper[4736]: I0316 15:29:07.264160 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" Mar 16 15:29:07 crc kubenswrapper[4736]: I0316 15:29:07.444058 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" event={"ID":"6cbcdd30-245d-4732-8986-77f861f1f568","Type":"ContainerStarted","Data":"73938ab84816c45d22e078263491fdc2f87df2dfa7b674468b92e630de434fee"} Mar 16 15:29:07 crc kubenswrapper[4736]: I0316 15:29:07.445266 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:29:07 crc kubenswrapper[4736]: I0316 15:29:07.468819 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" podStartSLOduration=3.004948171 podStartE2EDuration="41.468790161s" podCreationTimestamp="2026-03-16 15:28:26 +0000 UTC" firstStartedPulling="2026-03-16 15:28:28.728745707 +0000 UTC m=+910.456135994" lastFinishedPulling="2026-03-16 15:29:07.192587667 +0000 UTC m=+948.919977984" observedRunningTime="2026-03-16 15:29:07.460050508 +0000 UTC m=+949.187440805" watchObservedRunningTime="2026-03-16 15:29:07.468790161 +0000 UTC m=+949.196180468" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.264816 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.267608 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.288505 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.374948 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.427565 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.427640 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.427674 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bl5\" (UniqueName: \"kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.508228 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.509140 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.529151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.529552 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.529641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bl5\" (UniqueName: \"kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5\") pod 
\"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.529788 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.530185 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.555420 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bl5\" (UniqueName: \"kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5\") pod \"community-operators-cg2j4\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:08 crc kubenswrapper[4736]: I0316 15:29:08.596163 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:09 crc kubenswrapper[4736]: I0316 15:29:09.105794 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" Mar 16 15:29:09 crc kubenswrapper[4736]: I0316 15:29:09.141268 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:09 crc kubenswrapper[4736]: I0316 15:29:09.462509 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerID="b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2" exitCode=0 Mar 16 15:29:09 crc kubenswrapper[4736]: I0316 15:29:09.462572 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerDied","Data":"b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2"} Mar 16 15:29:09 crc kubenswrapper[4736]: I0316 15:29:09.462611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerStarted","Data":"6adb28b5ebfdc8a508ae5fc5b4bd583efa2407fcfae808125df19d5badb56cf9"} Mar 16 15:29:10 crc kubenswrapper[4736]: I0316 15:29:10.474305 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerStarted","Data":"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681"} Mar 16 15:29:11 crc kubenswrapper[4736]: I0316 15:29:11.484139 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerID="67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681" exitCode=0 Mar 16 15:29:11 crc kubenswrapper[4736]: I0316 15:29:11.484665 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" 
event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerDied","Data":"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681"} Mar 16 15:29:12 crc kubenswrapper[4736]: I0316 15:29:12.500028 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerStarted","Data":"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805"} Mar 16 15:29:12 crc kubenswrapper[4736]: I0316 15:29:12.528727 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cg2j4" podStartSLOduration=2.085711624 podStartE2EDuration="4.528698034s" podCreationTimestamp="2026-03-16 15:29:08 +0000 UTC" firstStartedPulling="2026-03-16 15:29:09.464743056 +0000 UTC m=+951.192133343" lastFinishedPulling="2026-03-16 15:29:11.907729426 +0000 UTC m=+953.635119753" observedRunningTime="2026-03-16 15:29:12.525686984 +0000 UTC m=+954.253077311" watchObservedRunningTime="2026-03-16 15:29:12.528698034 +0000 UTC m=+954.256088321" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.623994 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.627158 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.644715 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.777420 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2djv\" (UniqueName: \"kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.777543 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.777606 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.880741 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2djv\" (UniqueName: \"kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.880796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content\") pod \"redhat-operators-c9wpj\" 
(UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.881381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.881858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.882055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.904565 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2djv\" (UniqueName: \"kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv\") pod \"redhat-operators-c9wpj\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:15 crc kubenswrapper[4736]: I0316 15:29:15.954522 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:16 crc kubenswrapper[4736]: I0316 15:29:16.432371 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" Mar 16 15:29:16 crc kubenswrapper[4736]: I0316 15:29:16.482085 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" Mar 16 15:29:16 crc kubenswrapper[4736]: I0316 15:29:16.560075 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:16 crc kubenswrapper[4736]: I0316 15:29:16.819323 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" Mar 16 15:29:17 crc kubenswrapper[4736]: I0316 15:29:17.540188 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerID="690e2dc8b5d3feae633fb329f419f18b80285051aac5afa54d20d43e8f304aa9" exitCode=0 Mar 16 15:29:17 crc kubenswrapper[4736]: I0316 15:29:17.540235 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerDied","Data":"690e2dc8b5d3feae633fb329f419f18b80285051aac5afa54d20d43e8f304aa9"} Mar 16 15:29:17 crc kubenswrapper[4736]: I0316 15:29:17.540283 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerStarted","Data":"25a7eede4b5d3c8be347390edc281fa9783029d5701cfbb9d297735b4aea4e8b"} Mar 16 15:29:17 crc 
kubenswrapper[4736]: I0316 15:29:17.787820 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" Mar 16 15:29:18 crc kubenswrapper[4736]: I0316 15:29:18.596827 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:18 crc kubenswrapper[4736]: I0316 15:29:18.597413 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:18 crc kubenswrapper[4736]: I0316 15:29:18.658999 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:19 crc kubenswrapper[4736]: I0316 15:29:19.564949 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerStarted","Data":"98a5e9028d5823f423c5d0d82eaf6e3ae2122687697ddae27e270f83f3531237"} Mar 16 15:29:19 crc kubenswrapper[4736]: I0316 15:29:19.621343 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:21 crc kubenswrapper[4736]: I0316 15:29:21.585586 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerID="98a5e9028d5823f423c5d0d82eaf6e3ae2122687697ddae27e270f83f3531237" exitCode=0 Mar 16 15:29:21 crc kubenswrapper[4736]: I0316 15:29:21.585647 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerDied","Data":"98a5e9028d5823f423c5d0d82eaf6e3ae2122687697ddae27e270f83f3531237"} Mar 16 15:29:21 crc kubenswrapper[4736]: I0316 15:29:21.812473 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:21 crc kubenswrapper[4736]: I0316 15:29:21.812835 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cg2j4" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="registry-server" containerID="cri-o://beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805" gracePeriod=2 Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.324479 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.505282 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bl5\" (UniqueName: \"kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5\") pod \"bbe1b92d-5b14-4646-b9db-134dcce720cb\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.505457 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities\") pod \"bbe1b92d-5b14-4646-b9db-134dcce720cb\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.505517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content\") pod \"bbe1b92d-5b14-4646-b9db-134dcce720cb\" (UID: \"bbe1b92d-5b14-4646-b9db-134dcce720cb\") " Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.507005 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities" (OuterVolumeSpecName: "utilities") pod "bbe1b92d-5b14-4646-b9db-134dcce720cb" (UID: "bbe1b92d-5b14-4646-b9db-134dcce720cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.514122 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5" (OuterVolumeSpecName: "kube-api-access-c5bl5") pod "bbe1b92d-5b14-4646-b9db-134dcce720cb" (UID: "bbe1b92d-5b14-4646-b9db-134dcce720cb"). InnerVolumeSpecName "kube-api-access-c5bl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.567714 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbe1b92d-5b14-4646-b9db-134dcce720cb" (UID: "bbe1b92d-5b14-4646-b9db-134dcce720cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.597616 4736 generic.go:334] "Generic (PLEG): container finished" podID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerID="beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805" exitCode=0 Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.597712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerDied","Data":"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805"} Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.597775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cg2j4" event={"ID":"bbe1b92d-5b14-4646-b9db-134dcce720cb","Type":"ContainerDied","Data":"6adb28b5ebfdc8a508ae5fc5b4bd583efa2407fcfae808125df19d5badb56cf9"} Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.597782 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cg2j4" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.597798 4736 scope.go:117] "RemoveContainer" containerID="beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.607324 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.607356 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbe1b92d-5b14-4646-b9db-134dcce720cb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.607369 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bl5\" (UniqueName: \"kubernetes.io/projected/bbe1b92d-5b14-4646-b9db-134dcce720cb-kube-api-access-c5bl5\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.613451 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerStarted","Data":"3b8ebed01beabcdc08198ba39e2bf51b23f940797d6c5190ce7a86280576a9a7"} Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.626952 4736 scope.go:117] "RemoveContainer" containerID="67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.637686 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.642862 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cg2j4"] Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.650312 4736 scope.go:117] "RemoveContainer" containerID="b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.666970 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c9wpj" podStartSLOduration=3.034143241 podStartE2EDuration="7.666946487s" podCreationTimestamp="2026-03-16 15:29:15 +0000 UTC" firstStartedPulling="2026-03-16 15:29:17.54316276 +0000 UTC m=+959.270553047" lastFinishedPulling="2026-03-16 15:29:22.175965996 +0000 UTC m=+963.903356293" observedRunningTime="2026-03-16 15:29:22.658919834 +0000 UTC m=+964.386310121" watchObservedRunningTime="2026-03-16 15:29:22.666946487 +0000 UTC m=+964.394336774" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.675569 4736 scope.go:117] "RemoveContainer" containerID="beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805" Mar 16 15:29:22 crc kubenswrapper[4736]: E0316 15:29:22.676254 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805\": container with ID starting with beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805 not found: ID does not exist" containerID="beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.676293 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805"} err="failed to get container status \"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805\": rpc error: code = NotFound desc = could not find container \"beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805\": container with ID starting with beac500bf2c244985f8a0efeb2b1fbca1b44e7cd63241e7db9dec778918b8805 not found: ID does not exist" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.676320 4736 scope.go:117] "RemoveContainer" containerID="67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681" Mar 16 15:29:22 crc kubenswrapper[4736]: E0316 15:29:22.676615 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681\": container with ID starting with 67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681 not found: ID does not exist" containerID="67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.676644 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681"} err="failed to get container status \"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681\": rpc error: code = NotFound desc = could not find container \"67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681\": container with ID starting with 67439162f4ed74a984c4ffdc4b9b3926a6fca44afcb2ce28df0221768667a681 not found: ID does not exist" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.676662 4736 scope.go:117] "RemoveContainer" containerID="b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2" Mar 16 15:29:22 crc kubenswrapper[4736]: E0316 15:29:22.677177 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2\": container with ID starting with b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2 not found: ID does not exist" containerID="b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.677243 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2"} err="failed to get container status \"b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2\": rpc error: code = NotFound desc = could not find container \"b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2\": container with ID starting with b3ede3f3b9fadf19f419a464b03ab2ce731edadc15545fb507a6cd9435216ff2 not found: ID does not exist" Mar 16 15:29:22 crc kubenswrapper[4736]: I0316 15:29:22.989767 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" path="/var/lib/kubelet/pods/bbe1b92d-5b14-4646-b9db-134dcce720cb/volumes" Mar 16 15:29:25 crc kubenswrapper[4736]: I0316 15:29:25.955578 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:25 crc kubenswrapper[4736]: I0316 15:29:25.957357 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:27 crc kubenswrapper[4736]: I0316 15:29:27.009054 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c9wpj" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="registry-server" probeResult="failure" output=< Mar 16 15:29:27 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:29:27 crc kubenswrapper[4736]: > Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.709222 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:29:34 crc kubenswrapper[4736]: E0316 15:29:34.713676 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="extract-utilities" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.713701 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="extract-utilities" Mar 16 15:29:34 crc kubenswrapper[4736]: E0316 15:29:34.713713 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="registry-server" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.713721 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="registry-server" Mar 16 15:29:34 crc kubenswrapper[4736]: E0316 15:29:34.713753 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="extract-content" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.713761 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="extract-content" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.713895 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe1b92d-5b14-4646-b9db-134dcce720cb" containerName="registry-server" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.715994 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.727766 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.728021 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-sb2cs" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.728228 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.728344 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.814071 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.833135 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.833197 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfk4b\" (UniqueName: \"kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.866626 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.868000 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.873331 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.906023 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.937228 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.937297 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfk4b\" (UniqueName: \"kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.938479 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:34 crc kubenswrapper[4736]: I0316 15:29:34.993435 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfk4b\" (UniqueName: \"kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b\") pod \"dnsmasq-dns-7b95c5c449-sfvlm\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.041964 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.042065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.042111 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvzh\" (UniqueName: \"kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.066505 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.144117 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.144238 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.144283 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvzh\" (UniqueName: \"kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.145244 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.145479 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.191706 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvzh\" (UniqueName: \"kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh\") pod \"dnsmasq-dns-bd9cf7445-cfh79\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.202243 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.646229 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:29:35 crc kubenswrapper[4736]: I0316 15:29:35.753887 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" event={"ID":"65ff1eb1-99b2-40cb-b393-ed85d7072b2a","Type":"ContainerStarted","Data":"bf652946e51bcd1641222187b9e39cc4e2021fdb01a9d60e1a00d7fe317ac529"} Mar 16 15:29:36 crc kubenswrapper[4736]: I0316 15:29:36.028357 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:36 crc kubenswrapper[4736]: I0316 15:29:36.048164 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:29:36 crc kubenswrapper[4736]: I0316 15:29:36.121844 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:36 crc kubenswrapper[4736]: I0316 15:29:36.296676 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:36 crc kubenswrapper[4736]: I0316 15:29:36.773058 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" event={"ID":"936bfa8a-f1c2-4c9b-a5f9-84155418f791","Type":"ContainerStarted","Data":"c04419d8ad978dc6a59d640c946f9e68284bbf59d2cf454c8e44d65eee8aedea"} Mar 16 15:29:37 crc kubenswrapper[4736]: I0316 15:29:37.876362 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c9wpj" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="registry-server" containerID="cri-o://3b8ebed01beabcdc08198ba39e2bf51b23f940797d6c5190ce7a86280576a9a7" gracePeriod=2 Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.019030 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.067609 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.069043 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.088968 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.248090 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.248180 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq8pk\" (UniqueName: \"kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.248236 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.349493 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.349566 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq8pk\" (UniqueName: \"kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.349606 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.350932 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.351005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.407430 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq8pk\" (UniqueName: 
\"kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk\") pod \"dnsmasq-dns-58b7684d4f-wdpvw\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.440551 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.509891 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.510015 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.650482 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.721479 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.723435 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.762514 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.770619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.771125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.771241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwsjl\" (UniqueName: \"kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.894843 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.894974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pwsjl\" (UniqueName: \"kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.895074 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.896071 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.898198 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:38 crc kubenswrapper[4736]: I0316 15:29:38.935638 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwsjl\" (UniqueName: \"kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl\") pod \"dnsmasq-dns-86545856d7-5chd7\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.045839 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerID="3b8ebed01beabcdc08198ba39e2bf51b23f940797d6c5190ce7a86280576a9a7" exitCode=0 Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.045898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerDied","Data":"3b8ebed01beabcdc08198ba39e2bf51b23f940797d6c5190ce7a86280576a9a7"} Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.087533 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.302284 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.305785 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.352555 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.356497 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hw6vc" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.357956 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.359361 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.369723 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.370134 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.370383 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.370698 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.440048 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478164 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnmtk\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478240 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478321 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478349 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478379 
4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478436 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478463 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478497 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478532 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.478563 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.542559 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.581843 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.581917 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnmtk\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.581957 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.581977 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582005 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582027 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582047 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582110 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582136 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.582162 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.583254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.584054 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.585088 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.585416 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.592657 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.615329 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.618883 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.662956 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.665273 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.665775 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.670075 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnmtk\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.686437 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content\") pod \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.686503 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2djv\" (UniqueName: \"kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv\") pod \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.686542 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities\") pod \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\" (UID: \"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1\") " Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.693281 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities" (OuterVolumeSpecName: "utilities") pod "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" (UID: "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.730447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv" (OuterVolumeSpecName: "kube-api-access-x2djv") pod "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" (UID: "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1"). InnerVolumeSpecName "kube-api-access-x2djv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.738811 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " pod="openstack/rabbitmq-server-0" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.788518 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2djv\" (UniqueName: \"kubernetes.io/projected/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-kube-api-access-x2djv\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.788559 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:39 crc kubenswrapper[4736]: I0316 15:29:39.799662 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.059846 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:29:40 crc kubenswrapper[4736]: E0316 15:29:40.060892 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="extract-utilities" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.060911 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="extract-utilities" Mar 16 15:29:40 crc kubenswrapper[4736]: E0316 15:29:40.060927 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="extract-content" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.060935 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="extract-content" Mar 16 15:29:40 crc kubenswrapper[4736]: E0316 15:29:40.060955 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="registry-server" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.060963 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="registry-server" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.061183 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" containerName="registry-server" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.064435 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.082609 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fcb59" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.082667 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.082778 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.083187 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.083348 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.083492 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.083515 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.136552 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.141910 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" event={"ID":"2dd40335-c071-4d7e-8f2b-e76ae17febda","Type":"ContainerStarted","Data":"188ffa9a0be18c5d1d9eafa7c3751a7fd8298ca9b7313fbfd72dede31524dfc3"} Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.171579 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" (UID: "e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.176436 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c9wpj" event={"ID":"e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1","Type":"ContainerDied","Data":"25a7eede4b5d3c8be347390edc281fa9783029d5701cfbb9d297735b4aea4e8b"} Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.176496 4736 scope.go:117] "RemoveContainer" containerID="3b8ebed01beabcdc08198ba39e2bf51b23f940797d6c5190ce7a86280576a9a7" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.176706 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c9wpj" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.208791 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.209386 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.209511 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vn2\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.209630 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.209719 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.211536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.211634 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.211730 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.211841 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.211937 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.212027 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.212197 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.245942 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.295171 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316092 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316218 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316243 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4vn2\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316271 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316295 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316350 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316371 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.316401 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.317745 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.318125 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.319034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.319441 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"582900c6-e591-4ff4-ac53-a8965af431e2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.320173 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.320693 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.331656 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.332769 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.336222 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c9wpj"] Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.351557 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.366691 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.370487 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4vn2\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.437723 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.459122 4736 scope.go:117] "RemoveContainer" containerID="98a5e9028d5823f423c5d0d82eaf6e3ae2122687697ddae27e270f83f3531237" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.676360 4736 scope.go:117] "RemoveContainer" 
containerID="690e2dc8b5d3feae633fb329f419f18b80285051aac5afa54d20d43e8f304aa9" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.707286 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:29:40 crc kubenswrapper[4736]: I0316 15:29:40.841223 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.034978 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1" path="/var/lib/kubelet/pods/e0adb8fb-a295-4cc7-a7bf-edf91dbdf3a1/volumes" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.050936 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.053385 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.053508 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: W0316 15:29:41.057078 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod343be938_86f7_45c1_b8ef_a3143202be82.slice/crio-1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537 WatchSource:0}: Error finding container 1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537: Status 404 returned error can't find the container with id 1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537 Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.058620 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.058940 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.061472 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-88sqx" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.061661 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.079272 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.150781 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.150857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.150916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.150978 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-operator-scripts\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.151069 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-kolla-config\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.151093 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-generated\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.151279 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-default\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.151328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvq4h\" (UniqueName: \"kubernetes.io/projected/51e06fc2-19ee-4e32-8118-d4596cb6b124-kube-api-access-xvq4h\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.208012 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86545856d7-5chd7" event={"ID":"11a179c6-fb20-41f3-a16a-a189c65372af","Type":"ContainerStarted","Data":"12b6f1dda26366be0018255727be67f566f74ec42b23247770b2e22b8c22a554"} Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.243142 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerStarted","Data":"1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537"} Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.257430 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-default\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.261909 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvq4h\" (UniqueName: \"kubernetes.io/projected/51e06fc2-19ee-4e32-8118-d4596cb6b124-kube-api-access-xvq4h\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc 
kubenswrapper[4736]: I0316 15:29:41.262486 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.262616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.262800 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.262931 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-operator-scripts\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.263198 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-kolla-config\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.263391 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-generated\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.264007 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-generated\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.271080 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.259658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-config-data-default\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.272270 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.279085 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/51e06fc2-19ee-4e32-8118-d4596cb6b124-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.281931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-kolla-config\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.300192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e06fc2-19ee-4e32-8118-d4596cb6b124-operator-scripts\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.304305 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvq4h\" (UniqueName: \"kubernetes.io/projected/51e06fc2-19ee-4e32-8118-d4596cb6b124-kube-api-access-xvq4h\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.424201 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"51e06fc2-19ee-4e32-8118-d4596cb6b124\") " pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.599579 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: W0316 15:29:41.635146 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582900c6_e591_4ff4_ac53_a8965af431e2.slice/crio-5398aeaafa4630b70670235f94f05a9349eb3ecd03762e8b2936df6f875909a8 WatchSource:0}: Error finding container 5398aeaafa4630b70670235f94f05a9349eb3ecd03762e8b2936df6f875909a8: Status 404 returned error can't find the container with id 5398aeaafa4630b70670235f94f05a9349eb3ecd03762e8b2936df6f875909a8 Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.695929 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.718129 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.736263 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.748131 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.845397 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.846873 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.849869 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-q4fpf" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.851908 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.852803 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.876676 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.883536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.883621 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.883656 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5f6x\" (UniqueName: \"kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.883939 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.884093 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.890714 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.891070 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.901728 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-t6ll4" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.901942 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.906627 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986070 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6cb\" (UniqueName: \"kubernetes.io/projected/9243d80f-05dc-4dff-a328-780f64a121af-kube-api-access-4s6cb\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986143 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w6z\" (UniqueName: \"kubernetes.io/projected/7fa40817-425b-4ee8-9c3b-e7e109307837-kube-api-access-24w6z\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986174 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986210 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986259 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986287 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities\") pod 
\"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986314 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986336 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5f6x\" (UniqueName: \"kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986355 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986409 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-config-data\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986440 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9243d80f-05dc-4dff-a328-780f64a121af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986471 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986491 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-kolla-config\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.986511 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.987367 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:41 crc kubenswrapper[4736]: I0316 15:29:41.987633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.030739 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5f6x\" (UniqueName: \"kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x\") pod \"certified-operators-qsnvr\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211573 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-config-data\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211654 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9243d80f-05dc-4dff-a328-780f64a121af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211713 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-kolla-config\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211737 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6cb\" (UniqueName: \"kubernetes.io/projected/9243d80f-05dc-4dff-a328-780f64a121af-kube-api-access-4s6cb\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211819 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24w6z\" (UniqueName: \"kubernetes.io/projected/7fa40817-425b-4ee8-9c3b-e7e109307837-kube-api-access-24w6z\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211848 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211873 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.211890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.212872 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.213737 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-kolla-config\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.213949 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.215042 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.221838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.222773 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9243d80f-05dc-4dff-a328-780f64a121af-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.223511 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fa40817-425b-4ee8-9c3b-e7e109307837-config-data\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.224271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9243d80f-05dc-4dff-a328-780f64a121af-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.232762 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.237983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.242251 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24w6z\" (UniqueName: \"kubernetes.io/projected/7fa40817-425b-4ee8-9c3b-e7e109307837-kube-api-access-24w6z\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.242278 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9243d80f-05dc-4dff-a328-780f64a121af-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.244561 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7fa40817-425b-4ee8-9c3b-e7e109307837-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7fa40817-425b-4ee8-9c3b-e7e109307837\") " pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.271948 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6cb\" (UniqueName: \"kubernetes.io/projected/9243d80f-05dc-4dff-a328-780f64a121af-kube-api-access-4s6cb\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.292699 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9243d80f-05dc-4dff-a328-780f64a121af\") " pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.474182 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerStarted","Data":"5398aeaafa4630b70670235f94f05a9349eb3ecd03762e8b2936df6f875909a8"} Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.476833 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.553872 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 16 15:29:42 crc kubenswrapper[4736]: I0316 15:29:42.970669 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 16 15:29:43 crc kubenswrapper[4736]: I0316 15:29:43.555373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"51e06fc2-19ee-4e32-8118-d4596cb6b124","Type":"ContainerStarted","Data":"e5208d7a37fcd155a1d33bdc11bb0f40e7f05bfa3224be02fcc39e488657db27"} Mar 16 15:29:43 crc kubenswrapper[4736]: I0316 15:29:43.804972 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 16 15:29:44 crc kubenswrapper[4736]: I0316 15:29:44.781539 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7fa40817-425b-4ee8-9c3b-e7e109307837","Type":"ContainerStarted","Data":"1868f457e379d76cbb50d9b051e8270b2761a8e62a360c2d7f32f48c8f1bcfd1"} Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.015794 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.126021 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.136694 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.150646 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.219565 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.256385 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.256449 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtwd\" (UniqueName: \"kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.256515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.421623 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.422287 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.422335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtwd\" (UniqueName: \"kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.423207 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.423224 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " 
pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.478254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtwd\" (UniqueName: \"kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd\") pod \"redhat-marketplace-g5mdt\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.591078 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:29:45 crc kubenswrapper[4736]: I0316 15:29:45.913422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerStarted","Data":"54eb2c02167d080373bb2e8a45ee37068f1d51be9f0a87cf5062727440c5264b"} Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.190404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9243d80f-05dc-4dff-a328-780f64a121af","Type":"ContainerStarted","Data":"e862bf134a35ef9b65ae44ba536fdf4b71fabd5f5d50015ac567b2e66a24a173"} Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.677955 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.679560 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.690804 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7vfmx" Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.726039 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.836887 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrmnw\" (UniqueName: \"kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw\") pod \"kube-state-metrics-0\" (UID: \"0d509d96-9987-4162-8f43-55188067aa4e\") " pod="openstack/kube-state-metrics-0" Mar 16 15:29:46 crc kubenswrapper[4736]: I0316 15:29:46.938501 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrmnw\" (UniqueName: \"kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw\") pod \"kube-state-metrics-0\" (UID: \"0d509d96-9987-4162-8f43-55188067aa4e\") " pod="openstack/kube-state-metrics-0" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.010872 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrmnw\" (UniqueName: \"kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw\") pod \"kube-state-metrics-0\" (UID: \"0d509d96-9987-4162-8f43-55188067aa4e\") " pod="openstack/kube-state-metrics-0" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.022494 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.127215 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.313825 4736 generic.go:334] "Generic (PLEG): container finished" podID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerID="8213a1c38eb98a5b9034dd1c22aa82d5955500c03a01064199ce8773b2cec06a" exitCode=0 Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.314208 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerDied","Data":"8213a1c38eb98a5b9034dd1c22aa82d5955500c03a01064199ce8773b2cec06a"} Mar 16 15:29:47 crc kubenswrapper[4736]: W0316 15:29:47.376003 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e1d2ac2_5f4f_496f_a1e8_a6dd9f5c75ab.slice/crio-0b6f9c1416b5e32043af10389eb3b1970fd45667bc588079eed3e43843199323 WatchSource:0}: Error finding container 0b6f9c1416b5e32043af10389eb3b1970fd45667bc588079eed3e43843199323: Status 404 returned error can't find the container with id 0b6f9c1416b5e32043af10389eb3b1970fd45667bc588079eed3e43843199323 Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.520506 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9wkhh"] Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.521714 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.526285 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-229d8" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.529327 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.537578 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.569081 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh"] Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672002 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3d93764-b264-4e7d-87fe-ea95bd3fb252-scripts\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-combined-ca-bundle\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672087 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-log-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc 
kubenswrapper[4736]: I0316 15:29:47.672173 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjkw\" (UniqueName: \"kubernetes.io/projected/b3d93764-b264-4e7d-87fe-ea95bd3fb252-kube-api-access-fjjkw\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672223 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672241 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-ovn-controller-tls-certs\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.672269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-combined-ca-bundle\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783189 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-log-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783234 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjkw\" (UniqueName: \"kubernetes.io/projected/b3d93764-b264-4e7d-87fe-ea95bd3fb252-kube-api-access-fjjkw\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783288 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783314 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-ovn-controller-tls-certs\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783346 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.783396 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3d93764-b264-4e7d-87fe-ea95bd3fb252-scripts\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.785696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.785862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-log-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.789189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b3d93764-b264-4e7d-87fe-ea95bd3fb252-var-run-ovn\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.797937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-combined-ca-bundle\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.819505 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3d93764-b264-4e7d-87fe-ea95bd3fb252-ovn-controller-tls-certs\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.820741 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jchb9"] Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.830345 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3d93764-b264-4e7d-87fe-ea95bd3fb252-scripts\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.831719 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.836471 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jchb9"] Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.850721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjkw\" (UniqueName: \"kubernetes.io/projected/b3d93764-b264-4e7d-87fe-ea95bd3fb252-kube-api-access-fjjkw\") pod \"ovn-controller-9wkhh\" (UID: \"b3d93764-b264-4e7d-87fe-ea95bd3fb252\") " pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.917531 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.999380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-etc-ovs\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.999432 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-log\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:47 crc kubenswrapper[4736]: I0316 15:29:47.999458 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-run\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:48 crc kubenswrapper[4736]: I0316 15:29:47.999493 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-lib\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:48 crc kubenswrapper[4736]: I0316 15:29:47.999521 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/197c602f-0abb-430a-8011-a454072994fd-scripts\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:48 crc kubenswrapper[4736]: I0316 15:29:47.999599 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vf2p\" (UniqueName: \"kubernetes.io/projected/197c602f-0abb-430a-8011-a454072994fd-kube-api-access-9vf2p\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.485803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vf2p\" (UniqueName: \"kubernetes.io/projected/197c602f-0abb-430a-8011-a454072994fd-kube-api-access-9vf2p\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc 
kubenswrapper[4736]: I0316 15:29:49.493747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-etc-ovs\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.493830 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-log\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.493869 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-run\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.493970 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-lib\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.494043 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/197c602f-0abb-430a-8011-a454072994fd-scripts\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.506426 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-log\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.507009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-etc-ovs\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.510928 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-run\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.511306 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/197c602f-0abb-430a-8011-a454072994fd-var-lib\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.535880 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/197c602f-0abb-430a-8011-a454072994fd-scripts\") pod \"ovn-controller-ovs-jchb9\" (UID: 
\"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.566785 4736 generic.go:334] "Generic (PLEG): container finished" podID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerID="9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5" exitCode=0 Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.570675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerDied","Data":"9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5"} Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.570728 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerStarted","Data":"0b6f9c1416b5e32043af10389eb3b1970fd45667bc588079eed3e43843199323"} Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.682652 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vf2p\" (UniqueName: \"kubernetes.io/projected/197c602f-0abb-430a-8011-a454072994fd-kube-api-access-9vf2p\") pod \"ovn-controller-ovs-jchb9\" (UID: \"197c602f-0abb-430a-8011-a454072994fd\") " pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:49 crc kubenswrapper[4736]: I0316 15:29:49.711410 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:29:50 crc kubenswrapper[4736]: I0316 15:29:50.203712 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:29:50 crc kubenswrapper[4736]: I0316 15:29:50.459295 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh"] Mar 16 15:29:50 crc kubenswrapper[4736]: I0316 15:29:50.595922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh" event={"ID":"b3d93764-b264-4e7d-87fe-ea95bd3fb252","Type":"ContainerStarted","Data":"bc27dee91366793f1c8c285c291aade31436138dafd719031fd375575108dfbc"} Mar 16 15:29:50 crc kubenswrapper[4736]: I0316 15:29:50.608244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d509d96-9987-4162-8f43-55188067aa4e","Type":"ContainerStarted","Data":"33e254db6fa911e83922cfffec8e34cccdf8c2884ebd9f6a82a612e304936aab"} Mar 16 15:29:51 crc kubenswrapper[4736]: I0316 15:29:51.721942 4736 generic.go:334] "Generic (PLEG): container finished" podID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerID="669eb85d338749cf6a64ca867359a09c47b235e49bace6c358abf202db705d35" exitCode=0 Mar 16 15:29:51 crc kubenswrapper[4736]: I0316 15:29:51.725553 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerDied","Data":"669eb85d338749cf6a64ca867359a09c47b235e49bace6c358abf202db705d35"} Mar 16 15:29:51 crc kubenswrapper[4736]: I0316 15:29:51.777281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerStarted","Data":"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d"} Mar 16 15:29:52 crc kubenswrapper[4736]: I0316 15:29:52.687996 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 16 15:29:52 crc 
kubenswrapper[4736]: I0316 15:29:52.736312 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 16 15:29:52 crc kubenswrapper[4736]: I0316 15:29:52.736449 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.741908 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.742294 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.742480 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-dth9k" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.742734 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.742895 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850644 4736 generic.go:334] "Generic (PLEG): container finished" podID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerID="3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d" exitCode=0 Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerDied","Data":"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d"} Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850706 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-config\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850792 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.850974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.851355 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.851380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcjxv\" (UniqueName: \"kubernetes.io/projected/adfa9156-d077-4b45-af4d-cc113fbff209-kube-api-access-qcjxv\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.851713 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.851787 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.906094 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jchb9"] Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.955861 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.955906 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.955952 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-config\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.955971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.955988 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.956028 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " 
pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.956056 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcjxv\" (UniqueName: \"kubernetes.io/projected/adfa9156-d077-4b45-af4d-cc113fbff209-kube-api-access-qcjxv\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.956073 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.957084 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.957940 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.958366 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-config\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.958547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adfa9156-d077-4b45-af4d-cc113fbff209-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.989722 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcjxv\" (UniqueName: \"kubernetes.io/projected/adfa9156-d077-4b45-af4d-cc113fbff209-kube-api-access-qcjxv\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:52.996506 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.009735 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.022679 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.037085 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/adfa9156-d077-4b45-af4d-cc113fbff209-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"adfa9156-d077-4b45-af4d-cc113fbff209\") " pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.101212 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.104086 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.132091 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.132793 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.133128 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.133372 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.136131 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-b5nvn" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.149834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183494 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183570 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183595 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3228db46-56d3-4e82-8973-77a049c7e003-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183663 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-config\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183758 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183798 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183821 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lt29\" (UniqueName: \"kubernetes.io/projected/3228db46-56d3-4e82-8973-77a049c7e003-kube-api-access-6lt29\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.183870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.286968 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.287703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.287742 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3228db46-56d3-4e82-8973-77a049c7e003-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.287816 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-config\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.287967 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.287985 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 
15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.288009 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lt29\" (UniqueName: \"kubernetes.io/projected/3228db46-56d3-4e82-8973-77a049c7e003-kube-api-access-6lt29\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.288056 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.289721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3228db46-56d3-4e82-8973-77a049c7e003-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.290314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-config\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.290318 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3228db46-56d3-4e82-8973-77a049c7e003-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.288759 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.294479 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.302391 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.306775 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lt29\" (UniqueName: \"kubernetes.io/projected/3228db46-56d3-4e82-8973-77a049c7e003-kube-api-access-6lt29\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.316173 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3228db46-56d3-4e82-8973-77a049c7e003-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.341862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3228db46-56d3-4e82-8973-77a049c7e003\") " pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.447064 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 16 15:29:53 crc kubenswrapper[4736]: I0316 15:29:53.896997 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jchb9" event={"ID":"197c602f-0abb-430a-8011-a454072994fd","Type":"ContainerStarted","Data":"63601a10f645840cee3c1119fbe7453879df9dc957e733cc229784cc3e045c5d"} Mar 16 15:29:54 crc kubenswrapper[4736]: I0316 15:29:54.323973 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 16 15:29:54 crc kubenswrapper[4736]: W0316 15:29:54.775642 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadfa9156_d077_4b45_af4d_cc113fbff209.slice/crio-08bea05d851a16744a8fea7e9411efca88186df91722ed66467efd6be47d2763 WatchSource:0}: Error finding container 08bea05d851a16744a8fea7e9411efca88186df91722ed66467efd6be47d2763: Status 404 returned error can't find the container with id 08bea05d851a16744a8fea7e9411efca88186df91722ed66467efd6be47d2763 Mar 16 15:29:54 crc kubenswrapper[4736]: I0316 15:29:54.950752 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"adfa9156-d077-4b45-af4d-cc113fbff209","Type":"ContainerStarted","Data":"08bea05d851a16744a8fea7e9411efca88186df91722ed66467efd6be47d2763"} Mar 16 15:29:55 crc kubenswrapper[4736]: I0316 15:29:55.361673 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 16 15:29:56 crc kubenswrapper[4736]: I0316 15:29:56.018307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3228db46-56d3-4e82-8973-77a049c7e003","Type":"ContainerStarted","Data":"b076ee6e9904062cc949c814b69d3b6f10ccb48ea72f04b8700bd27a1dec9d91"} Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.153937 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561250-v4sxj"] Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.155997 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.161678 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561250-v4sxj"] Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.164975 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.165318 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.165454 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.290787 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cgz\" (UniqueName: \"kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz\") pod \"auto-csr-approver-29561250-v4sxj\" (UID: \"9803e05f-288f-4f40-9a58-8e0d8622ce48\") " pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.300226 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk"] Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.301599 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.319681 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.320452 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.335704 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk"] Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.392320 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.392398 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cgz\" (UniqueName: \"kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz\") pod \"auto-csr-approver-29561250-v4sxj\" (UID: \"9803e05f-288f-4f40-9a58-8e0d8622ce48\") " pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.392421 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: 
I0316 15:30:00.392463 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhstv\" (UniqueName: \"kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.718500 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.718596 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.718660 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhstv\" (UniqueName: \"kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.720080 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.748824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.749734 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cgz\" (UniqueName: \"kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz\") pod \"auto-csr-approver-29561250-v4sxj\" (UID: \"9803e05f-288f-4f40-9a58-8e0d8622ce48\") " pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.785596 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhstv\" (UniqueName: \"kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv\") pod \"collect-profiles-29561250-jqjsk\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.800591 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:00 crc kubenswrapper[4736]: I0316 15:30:00.934676 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.508166 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.508796 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.508843 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.509841 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.509897 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce" gracePeriod=600 Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.914573 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce" exitCode=0 Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.914649 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce"} Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.914689 4736 scope.go:117] "RemoveContainer" containerID="fc3be125f2287a40d18c8298f349a4df97007877194d3db241a15fe3b7bae6c8" Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.918435 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerStarted","Data":"f58d8ec2e3edc11dff5a2884370986ae74b9c287ae516f3f94b29c5658875a47"} Mar 16 15:30:08 crc kubenswrapper[4736]: I0316 15:30:08.944919 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qsnvr" podStartSLOduration=17.638781251 podStartE2EDuration="27.944895928s" podCreationTimestamp="2026-03-16 15:29:41 +0000 UTC" firstStartedPulling="2026-03-16 15:29:47.383358718 +0000 UTC 
m=+989.110749005" lastFinishedPulling="2026-03-16 15:29:57.689473395 +0000 UTC m=+999.416863682" observedRunningTime="2026-03-16 15:30:08.938967584 +0000 UTC m=+1010.666357871" watchObservedRunningTime="2026-03-16 15:30:08.944895928 +0000 UTC m=+1010.672286215" Mar 16 15:30:12 crc kubenswrapper[4736]: I0316 15:30:12.219076 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:12 crc kubenswrapper[4736]: I0316 15:30:12.219637 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:13 crc kubenswrapper[4736]: I0316 15:30:13.270830 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qsnvr" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="registry-server" probeResult="failure" output=< Mar 16 15:30:13 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:30:13 crc kubenswrapper[4736]: > Mar 16 15:30:18 crc kubenswrapper[4736]: E0316 15:30:18.753875 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:18 crc kubenswrapper[4736]: E0316 15:30:18.754204 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:18 crc kubenswrapper[4736]: E0316 15:30:18.754349 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s6cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(9243d80f-05dc-4dff-a328-780f64a121af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:18 crc kubenswrapper[4736]: E0316 15:30:18.756835 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" Mar 16 15:30:19 crc kubenswrapper[4736]: E0316 15:30:19.024893 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" Mar 16 15:30:20 crc kubenswrapper[4736]: E0316 15:30:20.211964 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:20 crc kubenswrapper[4736]: E0316 15:30:20.212129 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:20 crc kubenswrapper[4736]: E0316 15:30:20.212458 4736 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnmtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(343be938-86f7-45c1-b8ef-a3143202be82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:20 crc kubenswrapper[4736]: E0316 15:30:20.213744 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="343be938-86f7-45c1-b8ef-a3143202be82" Mar 16 15:30:21 crc kubenswrapper[4736]: E0316 15:30:21.046290 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/rabbitmq-server-0" podUID="343be938-86f7-45c1-b8ef-a3143202be82" Mar 16 15:30:22 crc kubenswrapper[4736]: I0316 15:30:22.282709 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:22 crc kubenswrapper[4736]: I0316 15:30:22.350292 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:22 crc kubenswrapper[4736]: I0316 15:30:22.542349 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:30:24 crc kubenswrapper[4736]: I0316 15:30:24.067238 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qsnvr" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="registry-server" containerID="cri-o://f58d8ec2e3edc11dff5a2884370986ae74b9c287ae516f3f94b29c5658875a47" gracePeriod=2 Mar 16 15:30:25 crc kubenswrapper[4736]: I0316 15:30:25.077174 4736 generic.go:334] "Generic (PLEG): container finished" podID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerID="f58d8ec2e3edc11dff5a2884370986ae74b9c287ae516f3f94b29c5658875a47" exitCode=0 Mar 16 15:30:25 crc kubenswrapper[4736]: I0316 15:30:25.077203 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerDied","Data":"f58d8ec2e3edc11dff5a2884370986ae74b9c287ae516f3f94b29c5658875a47"} Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.265535 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.265606 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.265791 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:e43235cb19da04699a53f42b6a75afe9,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bdh57bh6ch65bhcbh57fhd5hbhfh56chb8h65ch5b9hb6h9fh549h545h5f9h5b6h5b6h664h5dch9fhbdh5d5hcch96h59hdchc4h78h67dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vf2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-jchb9_openstack(197c602f-0abb-430a-8011-a454072994fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.266964 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-jchb9" podUID="197c602f-0abb-430a-8011-a454072994fd" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.287391 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.287458 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.287583 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvq4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(51e06fc2-19ee-4e32-8118-d4596cb6b124): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:26 crc kubenswrapper[4736]: E0316 15:30:26.288901 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.012946 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.013632 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.013882 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nc9h589h667h675hcfh57h67ch67ch697h68ch584h5c8hf5h8h58h6ch68dh66ch5cfh5c7h54fh579h668hfdh65h669hcch87h76h65h5f8h86q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24w6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7fa40817-425b-4ee8-9c3b-e7e109307837): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.015420 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7fa40817-425b-4ee8-9c3b-e7e109307837" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.057438 4736 log.go:32] "PullImage 
from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.057865 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.058073 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4vn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(582900c6-e591-4ff4-ac53-a8965af431e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.059361 4736 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.095567 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovn-controller-ovs-jchb9" podUID="197c602f-0abb-430a-8011-a454072994fd" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.095700 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.097016 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-memcached:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/memcached-0" podUID="7fa40817-425b-4ee8-9c3b-e7e109307837" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.097146 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-rabbitmq:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.985945 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.986002 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.986150 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zsvzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bd9cf7445-cfh79_openstack(936bfa8a-f1c2-4c9b-a5f9-84155418f791): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:27 crc kubenswrapper[4736]: E0316 15:30:27.987411 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" podUID="936bfa8a-f1c2-4c9b-a5f9-84155418f791" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.442201 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-controller:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.442407 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-controller:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.442594 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-controller:e43235cb19da04699a53f42b6a75afe9,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bdh57bh6ch65bhcbh57fhd5hbhfh56chb8h65ch5b9hb6h9fh549h545h5f9h5b6h5b6h664h5dch9fhbdh5d5hcch96h59hdchc4h78h67dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fjjkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-9wkhh_openstack(b3d93764-b264-4e7d-87fe-ea95bd3fb252): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.444287 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-9wkhh" podUID="b3d93764-b264-4e7d-87fe-ea95bd3fb252" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.988825 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-sb-db-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.988876 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-sb-db-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:28 crc kubenswrapper[4736]: E0316 15:30:28.989004 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-sb-db-server:e43235cb19da04699a53f42b6a75afe9,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n685h77h5ch56fh5dfh598h5c8h6dh65bh54h5f5h95hbch66dh559h584h686h8fhcbh64fh7ch76h84h569h54h579h57bhf9h4h645h5fch548q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcjxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(adfa9156-d077-4b45-af4d-cc113fbff209): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.053231 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.053300 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.053457 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq8pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-58b7684d4f-wdpvw_openstack(2dd40335-c071-4d7e-8f2b-e76ae17febda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.054867 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" podUID="2dd40335-c071-4d7e-8f2b-e76ae17febda" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.079619 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.079694 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.079826 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfk4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7b95c5c449-sfvlm_openstack(65ff1eb1-99b2-40cb-b393-ed85d7072b2a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.080948 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" podUID="65ff1eb1-99b2-40cb-b393-ed85d7072b2a" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.093894 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.093937 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.094038 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pwsjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86545856d7-5chd7_openstack(11a179c6-fb20-41f3-a16a-a189c65372af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.095237 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86545856d7-5chd7" podUID="11a179c6-fb20-41f3-a16a-a189c65372af" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.111271 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-controller:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovn-controller-9wkhh" podUID="b3d93764-b264-4e7d-87fe-ea95bd3fb252" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.111311 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" podUID="2dd40335-c071-4d7e-8f2b-e76ae17febda" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.111480 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/dnsmasq-dns-86545856d7-5chd7" podUID="11a179c6-fb20-41f3-a16a-a189c65372af" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 
15:30:29.376964 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-nb-db-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.377683 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-nb-db-server:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:30:29 crc kubenswrapper[4736]: E0316 15:30:29.378007 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-nb-db-server:e43235cb19da04699a53f42b6a75afe9,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6fh595h578h66ch654h595h7h96h579h577h55bh8bh5dchf7hb6h56fh66dh598hf4h669h64bh666h585h58fhd5h599h679h5fch659h66dh64ch89q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lt29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(3228db46-56d3-4e82-8973-77a049c7e003): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.527574 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.537870 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573163 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc\") pod \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573245 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5f6x\" (UniqueName: \"kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x\") pod \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573309 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content\") pod \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573332 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities\") pod \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\" (UID: \"da9a7f53-e3db-465d-b15c-3be5883d8c2a\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573355 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvzh\" (UniqueName: \"kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh\") pod \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\" (UID: 
\"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.573389 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config\") pod \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\" (UID: \"936bfa8a-f1c2-4c9b-a5f9-84155418f791\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.574159 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config" (OuterVolumeSpecName: "config") pod "936bfa8a-f1c2-4c9b-a5f9-84155418f791" (UID: "936bfa8a-f1c2-4c9b-a5f9-84155418f791"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.574676 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities" (OuterVolumeSpecName: "utilities") pod "da9a7f53-e3db-465d-b15c-3be5883d8c2a" (UID: "da9a7f53-e3db-465d-b15c-3be5883d8c2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.575264 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "936bfa8a-f1c2-4c9b-a5f9-84155418f791" (UID: "936bfa8a-f1c2-4c9b-a5f9-84155418f791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.594062 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh" (OuterVolumeSpecName: "kube-api-access-zsvzh") pod "936bfa8a-f1c2-4c9b-a5f9-84155418f791" (UID: "936bfa8a-f1c2-4c9b-a5f9-84155418f791"). InnerVolumeSpecName "kube-api-access-zsvzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.598730 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.603084 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x" (OuterVolumeSpecName: "kube-api-access-p5f6x") pod "da9a7f53-e3db-465d-b15c-3be5883d8c2a" (UID: "da9a7f53-e3db-465d-b15c-3be5883d8c2a"). InnerVolumeSpecName "kube-api-access-p5f6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.658432 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da9a7f53-e3db-465d-b15c-3be5883d8c2a" (UID: "da9a7f53-e3db-465d-b15c-3be5883d8c2a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.677886 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfk4b\" (UniqueName: \"kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b\") pod \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.677937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config\") pod \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\" (UID: \"65ff1eb1-99b2-40cb-b393-ed85d7072b2a\") " Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678491 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678505 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/936bfa8a-f1c2-4c9b-a5f9-84155418f791-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678515 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5f6x\" (UniqueName: \"kubernetes.io/projected/da9a7f53-e3db-465d-b15c-3be5883d8c2a-kube-api-access-p5f6x\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678527 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678535 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da9a7f53-e3db-465d-b15c-3be5883d8c2a-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.678544 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsvzh\" (UniqueName: \"kubernetes.io/projected/936bfa8a-f1c2-4c9b-a5f9-84155418f791-kube-api-access-zsvzh\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.679672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config" (OuterVolumeSpecName: "config") pod "65ff1eb1-99b2-40cb-b393-ed85d7072b2a" (UID: "65ff1eb1-99b2-40cb-b393-ed85d7072b2a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.684896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b" (OuterVolumeSpecName: "kube-api-access-dfk4b") pod "65ff1eb1-99b2-40cb-b393-ed85d7072b2a" (UID: "65ff1eb1-99b2-40cb-b393-ed85d7072b2a"). InnerVolumeSpecName "kube-api-access-dfk4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.780735 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfk4b\" (UniqueName: \"kubernetes.io/projected/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-kube-api-access-dfk4b\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.781238 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65ff1eb1-99b2-40cb-b393-ed85d7072b2a-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.953482 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk"] Mar 16 15:30:29 crc kubenswrapper[4736]: I0316 15:30:29.980663 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561250-v4sxj"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.119592 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d509d96-9987-4162-8f43-55188067aa4e","Type":"ContainerStarted","Data":"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.119713 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.129203 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" event={"ID":"929f108f-20f2-47ca-8e05-e29b5e7c4609","Type":"ContainerStarted","Data":"9f2c4b8e880643eb8d1b334bacf66b1bb8c46ae9601a9cb7d706e1f68019031b"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.137199 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.144593 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.010518414 podStartE2EDuration="44.1445728s" podCreationTimestamp="2026-03-16 15:29:46 +0000 UTC" firstStartedPulling="2026-03-16 15:29:50.324929954 +0000 UTC m=+992.052320241" lastFinishedPulling="2026-03-16 15:30:29.45898434 +0000 UTC m=+1031.186374627" observedRunningTime="2026-03-16 15:30:30.142620986 +0000 UTC m=+1031.870011273" watchObservedRunningTime="2026-03-16 15:30:30.1445728 +0000 UTC m=+1031.871963087" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.158712 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.158753 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b95c5c449-sfvlm" event={"ID":"65ff1eb1-99b2-40cb-b393-ed85d7072b2a","Type":"ContainerDied","Data":"bf652946e51bcd1641222187b9e39cc4e2021fdb01a9d60e1a00d7fe317ac529"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.171452 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qsnvr" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.171741 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qsnvr" event={"ID":"da9a7f53-e3db-465d-b15c-3be5883d8c2a","Type":"ContainerDied","Data":"54eb2c02167d080373bb2e8a45ee37068f1d51be9f0a87cf5062727440c5264b"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.171813 4736 scope.go:117] "RemoveContainer" containerID="f58d8ec2e3edc11dff5a2884370986ae74b9c287ae516f3f94b29c5658875a47" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.178037 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" event={"ID":"936bfa8a-f1c2-4c9b-a5f9-84155418f791","Type":"ContainerDied","Data":"c04419d8ad978dc6a59d640c946f9e68284bbf59d2cf454c8e44d65eee8aedea"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.178244 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bd9cf7445-cfh79" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.191809 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" event={"ID":"9803e05f-288f-4f40-9a58-8e0d8622ce48","Type":"ContainerStarted","Data":"7f21383c1f9dcad78b7d8021628102ee085dd3b99a06571cbefd53876c77338c"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.198432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerStarted","Data":"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e"} Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.227922 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g5mdt" podStartSLOduration=36.717784709 podStartE2EDuration="45.227895901s" podCreationTimestamp="2026-03-16 15:29:45 +0000 UTC" firstStartedPulling="2026-03-16 15:29:49.629310986 +0000 UTC m=+991.356701273" lastFinishedPulling="2026-03-16 15:29:58.139422178 +0000 UTC m=+999.866812465" observedRunningTime="2026-03-16 15:30:30.219527749 +0000 UTC m=+1031.946918066" watchObservedRunningTime="2026-03-16 15:30:30.227895901 +0000 UTC m=+1031.955286188" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.247015 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.269795 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qsnvr"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.275140 4736 scope.go:117] "RemoveContainer" containerID="669eb85d338749cf6a64ca867359a09c47b235e49bace6c358abf202db705d35" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.333539 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.394812 4736 scope.go:117] "RemoveContainer" containerID="8213a1c38eb98a5b9034dd1c22aa82d5955500c03a01064199ce8773b2cec06a" Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.445835 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b95c5c449-sfvlm"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.481785 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.489969 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bd9cf7445-cfh79"] Mar 16 15:30:30 crc kubenswrapper[4736]: I0316 15:30:30.998927 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65ff1eb1-99b2-40cb-b393-ed85d7072b2a" path="/var/lib/kubelet/pods/65ff1eb1-99b2-40cb-b393-ed85d7072b2a/volumes" Mar 16 15:30:31 crc kubenswrapper[4736]: I0316 15:30:31.000182 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="936bfa8a-f1c2-4c9b-a5f9-84155418f791" path="/var/lib/kubelet/pods/936bfa8a-f1c2-4c9b-a5f9-84155418f791/volumes" Mar 16 15:30:31 crc kubenswrapper[4736]: I0316 15:30:31.000748 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" path="/var/lib/kubelet/pods/da9a7f53-e3db-465d-b15c-3be5883d8c2a/volumes" Mar 16 15:30:31 crc kubenswrapper[4736]: I0316 15:30:31.215523 4736 generic.go:334] "Generic (PLEG): container finished" podID="929f108f-20f2-47ca-8e05-e29b5e7c4609" containerID="5cced67b82a376a1411e2ed994b07c0fe0b37e14cde8e6d954bb48e8fe1a1769" exitCode=0 Mar 16 15:30:31 crc kubenswrapper[4736]: I0316 15:30:31.215622 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" event={"ID":"929f108f-20f2-47ca-8e05-e29b5e7c4609","Type":"ContainerDied","Data":"5cced67b82a376a1411e2ed994b07c0fe0b37e14cde8e6d954bb48e8fe1a1769"} Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.317660 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.380433 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhstv\" (UniqueName: \"kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv\") pod \"929f108f-20f2-47ca-8e05-e29b5e7c4609\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.380517 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume\") pod \"929f108f-20f2-47ca-8e05-e29b5e7c4609\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.381923 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume\") pod \"929f108f-20f2-47ca-8e05-e29b5e7c4609\" (UID: \"929f108f-20f2-47ca-8e05-e29b5e7c4609\") " Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.382259 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume" (OuterVolumeSpecName: "config-volume") pod "929f108f-20f2-47ca-8e05-e29b5e7c4609" (UID: "929f108f-20f2-47ca-8e05-e29b5e7c4609"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.383616 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/929f108f-20f2-47ca-8e05-e29b5e7c4609-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.393562 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "929f108f-20f2-47ca-8e05-e29b5e7c4609" (UID: "929f108f-20f2-47ca-8e05-e29b5e7c4609"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.399798 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv" (OuterVolumeSpecName: "kube-api-access-mhstv") pod "929f108f-20f2-47ca-8e05-e29b5e7c4609" (UID: "929f108f-20f2-47ca-8e05-e29b5e7c4609"). InnerVolumeSpecName "kube-api-access-mhstv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.491940 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhstv\" (UniqueName: \"kubernetes.io/projected/929f108f-20f2-47ca-8e05-e29b5e7c4609-kube-api-access-mhstv\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.492214 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/929f108f-20f2-47ca-8e05-e29b5e7c4609-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.494565 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-m74rl"] Mar 16 15:30:33 crc kubenswrapper[4736]: E0316 15:30:33.495523 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="registry-server" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.495635 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="registry-server" Mar 16 15:30:33 crc kubenswrapper[4736]: E0316 15:30:33.495712 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="929f108f-20f2-47ca-8e05-e29b5e7c4609" containerName="collect-profiles" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.495765 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="929f108f-20f2-47ca-8e05-e29b5e7c4609" containerName="collect-profiles" Mar 16 15:30:33 crc kubenswrapper[4736]: E0316 15:30:33.495821 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="extract-content" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.495888 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="extract-content" Mar 16 15:30:33 crc kubenswrapper[4736]: E0316 15:30:33.495944 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="extract-utilities" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.495994 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="extract-utilities" Mar 16 15:30:33 crc 
kubenswrapper[4736]: I0316 15:30:33.516075 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="929f108f-20f2-47ca-8e05-e29b5e7c4609" containerName="collect-profiles" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.516379 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9a7f53-e3db-465d-b15c-3be5883d8c2a" containerName="registry-server" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.517308 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.519070 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m74rl"] Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.521939 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594300 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff193b6-fc55-427d-b256-a9b253fa60c4-config\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594360 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-combined-ca-bundle\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovn-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594469 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fjd\" (UniqueName: \"kubernetes.io/projected/8ff193b6-fc55-427d-b256-a9b253fa60c4-kube-api-access-d4fjd\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594486 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovs-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.594511 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697538 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4fjd\" (UniqueName: 
\"kubernetes.io/projected/8ff193b6-fc55-427d-b256-a9b253fa60c4-kube-api-access-d4fjd\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697590 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovs-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697619 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697667 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff193b6-fc55-427d-b256-a9b253fa60c4-config\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697699 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-combined-ca-bundle\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.697757 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovn-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.698200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovn-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.698569 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8ff193b6-fc55-427d-b256-a9b253fa60c4-ovs-rundir\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.698763 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ff193b6-fc55-427d-b256-a9b253fa60c4-config\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.706685 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-combined-ca-bundle\") pod \"ovn-controller-metrics-m74rl\" 
(UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.709438 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.717399 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ff193b6-fc55-427d-b256-a9b253fa60c4-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.761751 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4fjd\" (UniqueName: \"kubernetes.io/projected/8ff193b6-fc55-427d-b256-a9b253fa60c4-kube-api-access-d4fjd\") pod \"ovn-controller-metrics-m74rl\" (UID: \"8ff193b6-fc55-427d-b256-a9b253fa60c4\") " pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.778529 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.791665 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.809411 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.883957 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m74rl" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.917992 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.918255 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.918307 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zv87\" (UniqueName: \"kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.918346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:33 crc kubenswrapper[4736]: I0316 15:30:33.949191 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:34 crc 
kubenswrapper[4736]: I0316 15:30:34.026240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.026298 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zv87\" (UniqueName: \"kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.026339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.026375 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.033985 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.034035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.034057 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.052921 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zv87\" (UniqueName: \"kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87\") pod \"dnsmasq-dns-69497cbf8c-j58nb\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.132334 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.205861 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.262526 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.271513 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.274400 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.275561 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" event={"ID":"929f108f-20f2-47ca-8e05-e29b5e7c4609","Type":"ContainerDied","Data":"9f2c4b8e880643eb8d1b334bacf66b1bb8c46ae9601a9cb7d706e1f68019031b"} Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.275603 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f2c4b8e880643eb8d1b334bacf66b1bb8c46ae9601a9cb7d706e1f68019031b" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.275697 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.344750 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.344797 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.344855 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pswt\" (UniqueName: \"kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.344890 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.344939 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 
15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.355530 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.457805 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pswt\" (UniqueName: \"kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.458409 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.458474 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.458503 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.458534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.459681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.461061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.461800 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.463580 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " 
pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.504190 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pswt\" (UniqueName: \"kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt\") pod \"dnsmasq-dns-5cd56bc579-prjm2\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:34 crc kubenswrapper[4736]: I0316 15:30:34.645781 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.177299 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.187639 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.273883 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwsjl\" (UniqueName: \"kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl\") pod \"11a179c6-fb20-41f3-a16a-a189c65372af\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config\") pod \"11a179c6-fb20-41f3-a16a-a189c65372af\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274303 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq8pk\" (UniqueName: \"kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk\") pod \"2dd40335-c071-4d7e-8f2b-e76ae17febda\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274334 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc\") pod \"2dd40335-c071-4d7e-8f2b-e76ae17febda\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274380 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc\") pod \"11a179c6-fb20-41f3-a16a-a189c65372af\" (UID: \"11a179c6-fb20-41f3-a16a-a189c65372af\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config\") pod \"2dd40335-c071-4d7e-8f2b-e76ae17febda\" (UID: \"2dd40335-c071-4d7e-8f2b-e76ae17febda\") " Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.274936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config" (OuterVolumeSpecName: "config") pod "11a179c6-fb20-41f3-a16a-a189c65372af" (UID: "11a179c6-fb20-41f3-a16a-a189c65372af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.275319 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config" (OuterVolumeSpecName: "config") pod "2dd40335-c071-4d7e-8f2b-e76ae17febda" (UID: "2dd40335-c071-4d7e-8f2b-e76ae17febda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.275344 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11a179c6-fb20-41f3-a16a-a189c65372af" (UID: "11a179c6-fb20-41f3-a16a-a189c65372af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.275647 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dd40335-c071-4d7e-8f2b-e76ae17febda" (UID: "2dd40335-c071-4d7e-8f2b-e76ae17febda"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.282295 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl" (OuterVolumeSpecName: "kube-api-access-pwsjl") pod "11a179c6-fb20-41f3-a16a-a189c65372af" (UID: "11a179c6-fb20-41f3-a16a-a189c65372af"). InnerVolumeSpecName "kube-api-access-pwsjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.282374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk" (OuterVolumeSpecName: "kube-api-access-kq8pk") pod "2dd40335-c071-4d7e-8f2b-e76ae17febda" (UID: "2dd40335-c071-4d7e-8f2b-e76ae17febda"). InnerVolumeSpecName "kube-api-access-kq8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.287818 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86545856d7-5chd7" event={"ID":"11a179c6-fb20-41f3-a16a-a189c65372af","Type":"ContainerDied","Data":"12b6f1dda26366be0018255727be67f566f74ec42b23247770b2e22b8c22a554"} Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.287916 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86545856d7-5chd7" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.291472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" event={"ID":"2dd40335-c071-4d7e-8f2b-e76ae17febda","Type":"ContainerDied","Data":"188ffa9a0be18c5d1d9eafa7c3751a7fd8298ca9b7313fbfd72dede31524dfc3"} Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.291580 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58b7684d4f-wdpvw" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380053 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq8pk\" (UniqueName: \"kubernetes.io/projected/2dd40335-c071-4d7e-8f2b-e76ae17febda-kube-api-access-kq8pk\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380093 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380116 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380125 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dd40335-c071-4d7e-8f2b-e76ae17febda-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380135 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwsjl\" (UniqueName: \"kubernetes.io/projected/11a179c6-fb20-41f3-a16a-a189c65372af-kube-api-access-pwsjl\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.380146 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11a179c6-fb20-41f3-a16a-a189c65372af-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.400342 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.420910 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86545856d7-5chd7"] Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.463907 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.473230 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58b7684d4f-wdpvw"] Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.592022 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:35 crc kubenswrapper[4736]: I0316 15:30:35.592084 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.282945 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:36 crc kubenswrapper[4736]: E0316 15:30:36.391408 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="adfa9156-d077-4b45-af4d-cc113fbff209" Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.391714 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.394819 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"9243d80f-05dc-4dff-a328-780f64a121af","Type":"ContainerStarted","Data":"55e0f68550efe5ad3be04dc4a78ca58c3d415101611542e77eec6de61249888b"} Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.397312 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" event={"ID":"31398e35-3ae6-4eac-959c-2cf8ac536b1d","Type":"ContainerStarted","Data":"930f2134ce7e3250730f0652820b941322bb9ebb4fd4656f512acea98cad265c"} Mar 16 15:30:36 crc kubenswrapper[4736]: E0316 15:30:36.397642 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="3228db46-56d3-4e82-8973-77a049c7e003" Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.461650 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m74rl"] Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.646441 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-g5mdt" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="registry-server" probeResult="failure" output=< Mar 16 15:30:36 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:30:36 crc kubenswrapper[4736]: > Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.992406 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a179c6-fb20-41f3-a16a-a189c65372af" path="/var/lib/kubelet/pods/11a179c6-fb20-41f3-a16a-a189c65372af/volumes" Mar 16 15:30:36 crc kubenswrapper[4736]: I0316 15:30:36.994194 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd40335-c071-4d7e-8f2b-e76ae17febda" path="/var/lib/kubelet/pods/2dd40335-c071-4d7e-8f2b-e76ae17febda/volumes" Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.032395 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.408424 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerStarted","Data":"b19f745e7621dc0afc50381b80edc26830f3e6c65e4a8e103d89b5ac5336e755"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.411339 4736 generic.go:334] "Generic (PLEG): container finished" podID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerID="ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3" exitCode=0 Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.411409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" event={"ID":"31398e35-3ae6-4eac-959c-2cf8ac536b1d","Type":"ContainerDied","Data":"ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.414066 4736 generic.go:334] "Generic (PLEG): container finished" podID="9803e05f-288f-4f40-9a58-8e0d8622ce48" containerID="6745450ba06f002766ab495fc801d3d5544d0a839aeac824ebf6498a09bf3b12" exitCode=0 Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.414139 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" event={"ID":"9803e05f-288f-4f40-9a58-8e0d8622ce48","Type":"ContainerDied","Data":"6745450ba06f002766ab495fc801d3d5544d0a839aeac824ebf6498a09bf3b12"} Mar 16 15:30:37 crc 
kubenswrapper[4736]: I0316 15:30:37.415907 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m74rl" event={"ID":"8ff193b6-fc55-427d-b256-a9b253fa60c4","Type":"ContainerStarted","Data":"055be192066f7291074328f8cf108c463761816e5b88f5d27e39cb1cc5b36558"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.415955 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m74rl" event={"ID":"8ff193b6-fc55-427d-b256-a9b253fa60c4","Type":"ContainerStarted","Data":"655ce7fd2f22944e218c1ae7b41408ef15627c0ab7ab8608a536dde8da5f69bc"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.417690 4736 generic.go:334] "Generic (PLEG): container finished" podID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerID="a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280" exitCode=0 Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.418005 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" event={"ID":"73c748c6-cd8c-4510-8d12-3624b65ddebb","Type":"ContainerDied","Data":"a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.418066 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" event={"ID":"73c748c6-cd8c-4510-8d12-3624b65ddebb","Type":"ContainerStarted","Data":"28d23190597c1ba5ea4ea9c028b166ac89e99455a0af778a468a15089ce6b2c9"} Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.422798 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"adfa9156-d077-4b45-af4d-cc113fbff209","Type":"ContainerStarted","Data":"4169a10acab1fa7a4e15ca3ff80a7f0e5e8e4bf146ba1ab55df915d719df7754"} Mar 16 15:30:37 crc kubenswrapper[4736]: E0316 15:30:37.427554 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-sb-db-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="adfa9156-d077-4b45-af4d-cc113fbff209" Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.428394 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3228db46-56d3-4e82-8973-77a049c7e003","Type":"ContainerStarted","Data":"cadadcba2bc3d2a27ed98a2edbc1af8a6a223231713ff0530646f0e5ed753d2f"} Mar 16 15:30:37 crc kubenswrapper[4736]: E0316 15:30:37.429816 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-nb-db-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3228db46-56d3-4e82-8973-77a049c7e003" Mar 16 15:30:37 crc kubenswrapper[4736]: I0316 15:30:37.545606 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-m74rl" podStartSLOduration=4.54557547 podStartE2EDuration="4.54557547s" podCreationTimestamp="2026-03-16 15:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:30:37.538754012 +0000 UTC m=+1039.266144299" watchObservedRunningTime="2026-03-16 15:30:37.54557547 +0000 UTC m=+1039.272965757" Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 
15:30:38.440440 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" event={"ID":"31398e35-3ae6-4eac-959c-2cf8ac536b1d","Type":"ContainerStarted","Data":"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433"} Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.442119 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.443668 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" event={"ID":"73c748c6-cd8c-4510-8d12-3624b65ddebb","Type":"ContainerStarted","Data":"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b"} Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.444253 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:38 crc kubenswrapper[4736]: E0316 15:30:38.445616 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-sb-db-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="adfa9156-d077-4b45-af4d-cc113fbff209" Mar 16 15:30:38 crc kubenswrapper[4736]: E0316 15:30:38.446840 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-nb-db-server:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="3228db46-56d3-4e82-8973-77a049c7e003" Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.467879 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" podStartSLOduration=5.195763148 podStartE2EDuration="5.467849767s" podCreationTimestamp="2026-03-16 15:30:33 +0000 UTC" firstStartedPulling="2026-03-16 15:30:36.345671593 +0000 UTC m=+1038.073061880" lastFinishedPulling="2026-03-16 15:30:36.617758212 +0000 UTC m=+1038.345148499" observedRunningTime="2026-03-16 15:30:38.463609949 +0000 UTC m=+1040.191000246" watchObservedRunningTime="2026-03-16 15:30:38.467849767 +0000 UTC m=+1040.195240074" Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.509536 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" podStartSLOduration=4.2709075819999995 podStartE2EDuration="4.509517363s" podCreationTimestamp="2026-03-16 15:30:34 +0000 UTC" firstStartedPulling="2026-03-16 15:30:36.385883018 +0000 UTC m=+1038.113273295" lastFinishedPulling="2026-03-16 15:30:36.624492789 +0000 UTC m=+1038.351883076" observedRunningTime="2026-03-16 15:30:38.507274621 +0000 UTC m=+1040.234664908" watchObservedRunningTime="2026-03-16 15:30:38.509517363 +0000 UTC m=+1040.236907650" Mar 16 15:30:38 crc kubenswrapper[4736]: I0316 15:30:38.968619 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.157354 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6cgz\" (UniqueName: \"kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz\") pod \"9803e05f-288f-4f40-9a58-8e0d8622ce48\" (UID: \"9803e05f-288f-4f40-9a58-8e0d8622ce48\") " Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.192600 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz" (OuterVolumeSpecName: "kube-api-access-j6cgz") pod "9803e05f-288f-4f40-9a58-8e0d8622ce48" (UID: "9803e05f-288f-4f40-9a58-8e0d8622ce48"). InnerVolumeSpecName "kube-api-access-j6cgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.260303 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6cgz\" (UniqueName: \"kubernetes.io/projected/9803e05f-288f-4f40-9a58-8e0d8622ce48-kube-api-access-j6cgz\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.455960 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.455976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561250-v4sxj" event={"ID":"9803e05f-288f-4f40-9a58-8e0d8622ce48","Type":"ContainerDied","Data":"7f21383c1f9dcad78b7d8021628102ee085dd3b99a06571cbefd53876c77338c"} Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.456515 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f21383c1f9dcad78b7d8021628102ee085dd3b99a06571cbefd53876c77338c" Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.458367 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerStarted","Data":"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5"} Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.459686 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7fa40817-425b-4ee8-9c3b-e7e109307837","Type":"ContainerStarted","Data":"192b54a34b16a6efa162831c3b9fc73b5af615c0d0837cea570b21c1f8e6a7e6"} Mar 16 15:30:39 crc kubenswrapper[4736]: I0316 15:30:39.504865 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.198044121 podStartE2EDuration="58.504839275s" podCreationTimestamp="2026-03-16 15:29:41 +0000 UTC" firstStartedPulling="2026-03-16 15:29:43.879972336 +0000 UTC m=+985.607362623" lastFinishedPulling="2026-03-16 15:30:39.1867675 +0000 UTC m=+1040.914157777" observedRunningTime="2026-03-16 15:30:39.502581442 +0000 UTC m=+1041.229971729" watchObservedRunningTime="2026-03-16 15:30:39.504839275 +0000 UTC m=+1041.232229562" Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 15:30:40.052934 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561244-zqjt8"] Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 15:30:40.059735 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561244-zqjt8"] Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 
15:30:40.472591 4736 generic.go:334] "Generic (PLEG): container finished" podID="197c602f-0abb-430a-8011-a454072994fd" containerID="915009411a05997d3793bc0a4735ad0dc47f7fefefb18439c170153d8aceb3dc" exitCode=0 Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 15:30:40.472660 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jchb9" event={"ID":"197c602f-0abb-430a-8011-a454072994fd","Type":"ContainerDied","Data":"915009411a05997d3793bc0a4735ad0dc47f7fefefb18439c170153d8aceb3dc"} Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 15:30:40.474574 4736 generic.go:334] "Generic (PLEG): container finished" podID="9243d80f-05dc-4dff-a328-780f64a121af" containerID="55e0f68550efe5ad3be04dc4a78ca58c3d415101611542e77eec6de61249888b" exitCode=0 Mar 16 15:30:40 crc kubenswrapper[4736]: I0316 15:30:40.474780 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9243d80f-05dc-4dff-a328-780f64a121af","Type":"ContainerDied","Data":"55e0f68550efe5ad3be04dc4a78ca58c3d415101611542e77eec6de61249888b"} Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.000557 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad" path="/var/lib/kubelet/pods/986ddf4f-6e5d-4d1e-8928-0fbe9afa62ad/volumes" Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.484716 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9243d80f-05dc-4dff-a328-780f64a121af","Type":"ContainerStarted","Data":"55a4feefcf75ad6e34eca5af0f8739ee54072346c8ab23bbb4a93e26d3f28b7e"} Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.488792 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"51e06fc2-19ee-4e32-8118-d4596cb6b124","Type":"ContainerStarted","Data":"48fa4b871745430bc9765b9d0133920ebd7f612adcad5a259449cd2b49513198"} Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.505419 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jchb9" event={"ID":"197c602f-0abb-430a-8011-a454072994fd","Type":"ContainerStarted","Data":"d5fb14a41919c3af60c2683ee24de9369e5b8ed4848304e813eddd07a0ab4b0a"} Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.505457 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jchb9" event={"ID":"197c602f-0abb-430a-8011-a454072994fd","Type":"ContainerStarted","Data":"30314c73a6ff127618c0ecf32109f2ee1308a48554250d97ab3a2950236cbba1"} Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.506250 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.506285 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:30:41 crc kubenswrapper[4736]: I0316 15:30:41.563251 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.982780741 podStartE2EDuration="1m1.563230148s" podCreationTimestamp="2026-03-16 15:29:40 +0000 UTC" firstStartedPulling="2026-03-16 15:29:45.287884254 +0000 UTC m=+987.015274541" lastFinishedPulling="2026-03-16 15:30:35.868333671 +0000 UTC m=+1037.595723948" observedRunningTime="2026-03-16 15:30:41.550232337 +0000 UTC m=+1043.277622614" watchObservedRunningTime="2026-03-16 15:30:41.563230148 +0000 UTC m=+1043.290620425" Mar 16 15:30:41 
crc kubenswrapper[4736]: I0316 15:30:41.586015 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jchb9" podStartSLOduration=8.382943036 podStartE2EDuration="54.585988569s" podCreationTimestamp="2026-03-16 15:29:47 +0000 UTC" firstStartedPulling="2026-03-16 15:29:52.981388312 +0000 UTC m=+994.708778599" lastFinishedPulling="2026-03-16 15:30:39.184433835 +0000 UTC m=+1040.911824132" observedRunningTime="2026-03-16 15:30:41.581288789 +0000 UTC m=+1043.308679086" watchObservedRunningTime="2026-03-16 15:30:41.585988569 +0000 UTC m=+1043.313378856" Mar 16 15:30:42 crc kubenswrapper[4736]: I0316 15:30:42.477399 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 16 15:30:42 crc kubenswrapper[4736]: I0316 15:30:42.554717 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 16 15:30:42 crc kubenswrapper[4736]: I0316 15:30:42.554798 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 16 15:30:43 crc kubenswrapper[4736]: I0316 15:30:43.524556 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh" event={"ID":"b3d93764-b264-4e7d-87fe-ea95bd3fb252","Type":"ContainerStarted","Data":"fa5283683a868fe18ae90a129fe67cc853dee15c3e5637b972fd393e1416958f"} Mar 16 15:30:43 crc kubenswrapper[4736]: I0316 15:30:43.525206 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9wkhh" Mar 16 15:30:43 crc kubenswrapper[4736]: I0316 15:30:43.548487 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9wkhh" podStartSLOduration=3.954603332 podStartE2EDuration="56.548464763s" podCreationTimestamp="2026-03-16 15:29:47 +0000 UTC" firstStartedPulling="2026-03-16 15:29:50.543985531 +0000 UTC m=+992.271375818" lastFinishedPulling="2026-03-16 15:30:43.137846962 +0000 UTC m=+1044.865237249" observedRunningTime="2026-03-16 15:30:43.547716762 +0000 UTC m=+1045.275107049" watchObservedRunningTime="2026-03-16 15:30:43.548464763 +0000 UTC m=+1045.275855050" Mar 16 15:30:44 crc kubenswrapper[4736]: I0316 15:30:44.134502 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:44 crc kubenswrapper[4736]: I0316 15:30:44.649051 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:30:44 crc kubenswrapper[4736]: I0316 15:30:44.721746 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:44 crc kubenswrapper[4736]: I0316 15:30:44.722029 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="dnsmasq-dns" containerID="cri-o://7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433" gracePeriod=10 Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.434072 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.494227 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zv87\" (UniqueName: \"kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87\") pod \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.494290 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc\") pod \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.494366 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config\") pod \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.494480 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb\") pod \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\" (UID: \"31398e35-3ae6-4eac-959c-2cf8ac536b1d\") " Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.516066 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87" (OuterVolumeSpecName: "kube-api-access-5zv87") pod "31398e35-3ae6-4eac-959c-2cf8ac536b1d" (UID: "31398e35-3ae6-4eac-959c-2cf8ac536b1d"). InnerVolumeSpecName "kube-api-access-5zv87". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.545567 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config" (OuterVolumeSpecName: "config") pod "31398e35-3ae6-4eac-959c-2cf8ac536b1d" (UID: "31398e35-3ae6-4eac-959c-2cf8ac536b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.557093 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "31398e35-3ae6-4eac-959c-2cf8ac536b1d" (UID: "31398e35-3ae6-4eac-959c-2cf8ac536b1d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.559366 4736 generic.go:334] "Generic (PLEG): container finished" podID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerID="48fa4b871745430bc9765b9d0133920ebd7f612adcad5a259449cd2b49513198" exitCode=0 Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.559450 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"51e06fc2-19ee-4e32-8118-d4596cb6b124","Type":"ContainerDied","Data":"48fa4b871745430bc9765b9d0133920ebd7f612adcad5a259449cd2b49513198"} Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.586750 4736 generic.go:334] "Generic (PLEG): container finished" podID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerID="7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433" exitCode=0 Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.586809 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" event={"ID":"31398e35-3ae6-4eac-959c-2cf8ac536b1d","Type":"ContainerDied","Data":"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433"} Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.586854 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" event={"ID":"31398e35-3ae6-4eac-959c-2cf8ac536b1d","Type":"ContainerDied","Data":"930f2134ce7e3250730f0652820b941322bb9ebb4fd4656f512acea98cad265c"} Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.586862 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69497cbf8c-j58nb" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.586879 4736 scope.go:117] "RemoveContainer" containerID="7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.600989 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.601025 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.601045 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zv87\" (UniqueName: \"kubernetes.io/projected/31398e35-3ae6-4eac-959c-2cf8ac536b1d-kube-api-access-5zv87\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.607988 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "31398e35-3ae6-4eac-959c-2cf8ac536b1d" (UID: "31398e35-3ae6-4eac-959c-2cf8ac536b1d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.643586 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.644924 4736 scope.go:117] "RemoveContainer" containerID="ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.696954 4736 scope.go:117] "RemoveContainer" containerID="7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433" Mar 16 15:30:45 crc kubenswrapper[4736]: E0316 15:30:45.697468 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433\": container with ID starting with 7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433 not found: ID does not exist" containerID="7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.697506 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433"} err="failed to get container status \"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433\": rpc error: code = NotFound desc = could not find container \"7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433\": container with ID starting with 7569dc9d961083a337d769b1904bb3f8861b94ca9d460e3ecf713a5a760a2433 not found: ID does not exist" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.697533 4736 scope.go:117] "RemoveContainer" containerID="ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3" Mar 16 15:30:45 crc kubenswrapper[4736]: E0316 15:30:45.697829 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3\": container with ID starting with ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3 not found: ID does not exist" containerID="ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.697852 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3"} err="failed to get container status \"ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3\": rpc error: code = NotFound desc = could not find container \"ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3\": container with ID starting with ba0c89e23909616ca58b643b3dd4d6142510e7f751445c55366a9760a911dcd3 not found: ID does not exist" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.703254 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31398e35-3ae6-4eac-959c-2cf8ac536b1d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.710874 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 15:30:45.932995 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:45 crc kubenswrapper[4736]: I0316 
15:30:45.938591 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69497cbf8c-j58nb"] Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.130693 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.597959 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"51e06fc2-19ee-4e32-8118-d4596cb6b124","Type":"ContainerStarted","Data":"1cc900afeb4af4b1b89f6a9d9a44bc72a6c327b2c62a305ca2e100bfcf1b41c9"} Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.626803 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371969.22801 podStartE2EDuration="1m7.626766373s" podCreationTimestamp="2026-03-16 15:29:39 +0000 UTC" firstStartedPulling="2026-03-16 15:29:43.10729894 +0000 UTC m=+984.834689227" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:30:46.624459199 +0000 UTC m=+1048.351849476" watchObservedRunningTime="2026-03-16 15:30:46.626766373 +0000 UTC m=+1048.354156650" Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.664426 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.761586 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 16 15:30:46 crc kubenswrapper[4736]: I0316 15:30:46.990585 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" path="/var/lib/kubelet/pods/31398e35-3ae6-4eac-959c-2cf8ac536b1d/volumes" Mar 16 15:30:47 crc kubenswrapper[4736]: I0316 15:30:47.478569 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 16 15:30:47 crc kubenswrapper[4736]: I0316 15:30:47.607951 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g5mdt" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="registry-server" containerID="cri-o://ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e" gracePeriod=2 Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.116079 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.167669 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content\") pod \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.167742 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtwd\" (UniqueName: \"kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd\") pod \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.167821 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities\") pod \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\" (UID: \"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab\") " Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.168854 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities" (OuterVolumeSpecName: "utilities") pod "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" (UID: "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.181447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd" (OuterVolumeSpecName: "kube-api-access-cxtwd") pod "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" (UID: "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab"). InnerVolumeSpecName "kube-api-access-cxtwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.250665 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" (UID: "5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.270195 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.270242 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxtwd\" (UniqueName: \"kubernetes.io/projected/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-kube-api-access-cxtwd\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.270255 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.618353 4736 generic.go:334] "Generic (PLEG): container finished" podID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerID="ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e" exitCode=0 Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.618434 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g5mdt" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.618446 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerDied","Data":"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e"} Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.618898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g5mdt" event={"ID":"5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab","Type":"ContainerDied","Data":"0b6f9c1416b5e32043af10389eb3b1970fd45667bc588079eed3e43843199323"} Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.618926 4736 scope.go:117] "RemoveContainer" containerID="ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.638202 4736 scope.go:117] "RemoveContainer" containerID="3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.663766 4736 scope.go:117] "RemoveContainer" containerID="9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.680383 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.696916 4736 scope.go:117] "RemoveContainer" containerID="ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.700284 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g5mdt"] Mar 16 15:30:48 crc kubenswrapper[4736]: E0316 15:30:48.701217 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e\": container with ID starting with ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e not found: ID does not exist" containerID="ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.703553 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e"} err="failed to get container status \"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e\": rpc error: code = NotFound desc = could not find container \"ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e\": container with ID starting with ae345891c3a496474dcb718801a48193016b163058ceac1e2066d799ebe8c59e not found: ID does not exist" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.703601 4736 scope.go:117] "RemoveContainer" containerID="3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d" Mar 16 15:30:48 crc kubenswrapper[4736]: E0316 15:30:48.705832 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d\": container with ID starting with 3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d not found: ID does not exist" containerID="3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.705965 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d"} err="failed to get container status \"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d\": rpc error: code = NotFound desc = could not find container \"3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d\": container with ID starting with 3a7bce1e7e6c929f0e9bcaf2aff5f59833c1b81a90d912b04ccad96abf85644d not found: ID does not exist" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.706073 4736 scope.go:117] "RemoveContainer" containerID="9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5" Mar 16 15:30:48 crc kubenswrapper[4736]: E0316 15:30:48.708163 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5\": container with ID starting with 9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5 not found: ID does not exist" containerID="9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.708200 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5"} err="failed to get container status \"9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5\": rpc error: code = NotFound desc = could not find container \"9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5\": container with ID starting with 9e94d97239afca8bbae3da0b26b1897d2aa353bcdaf3e8a381c78232d5e353c5 not found: ID does not exist" Mar 16 15:30:48 crc kubenswrapper[4736]: I0316 15:30:48.991035 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" path="/var/lib/kubelet/pods/5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab/volumes" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.915640 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kb9cl"] Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916361 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="dnsmasq-dns" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916376 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="dnsmasq-dns" Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916404 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="registry-server" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916434 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="registry-server" Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916456 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="extract-content" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916462 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="extract-content" Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916474 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="extract-utilities" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916481 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="extract-utilities" Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916502 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="init" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916508 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="init" Mar 16 15:30:50 crc kubenswrapper[4736]: E0316 15:30:50.916529 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9803e05f-288f-4f40-9a58-8e0d8622ce48" containerName="oc" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916535 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9803e05f-288f-4f40-9a58-8e0d8622ce48" containerName="oc" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916676 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e1d2ac2-5f4f-496f-a1e8-a6dd9f5c75ab" containerName="registry-server" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916686 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9803e05f-288f-4f40-9a58-8e0d8622ce48" containerName="oc" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.916699 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="31398e35-3ae6-4eac-959c-2cf8ac536b1d" containerName="dnsmasq-dns" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.917339 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.921984 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 16 15:30:50 crc kubenswrapper[4736]: I0316 15:30:50.939318 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kb9cl"] Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.024687 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.025227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7224\" (UniqueName: \"kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.127319 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7224\" (UniqueName: \"kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.127449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.128303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.157879 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7224\" (UniqueName: \"kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224\") pod \"root-account-create-update-kb9cl\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.236035 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.680280 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kb9cl"] Mar 16 15:30:51 crc kubenswrapper[4736]: W0316 15:30:51.688317 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb568935b_8768_4dd6_b103_3e7e900706d2.slice/crio-fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8 WatchSource:0}: Error finding container fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8: Status 404 returned error can't find the container with id fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8 Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.696038 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 16 15:30:51 crc kubenswrapper[4736]: I0316 15:30:51.696087 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.162197 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.669394 4736 generic.go:334] "Generic (PLEG): container finished" podID="b568935b-8768-4dd6-b103-3e7e900706d2" containerID="2fef9ead1db6d1a0f6ed9b735d599ef12ff1fc98fd77396e9ab15ecc235ab57b" exitCode=0 Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.670074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kb9cl" event={"ID":"b568935b-8768-4dd6-b103-3e7e900706d2","Type":"ContainerDied","Data":"2fef9ead1db6d1a0f6ed9b735d599ef12ff1fc98fd77396e9ab15ecc235ab57b"} Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.670201 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kb9cl" event={"ID":"b568935b-8768-4dd6-b103-3e7e900706d2","Type":"ContainerStarted","Data":"fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8"} Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.681576 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"adfa9156-d077-4b45-af4d-cc113fbff209","Type":"ContainerStarted","Data":"8dd3899ba0aa8a434bd58ae9a1011a15987a38e72298fcaa3df437fe574e77c6"} Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.691817 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3228db46-56d3-4e82-8973-77a049c7e003","Type":"ContainerStarted","Data":"709817f743bb92e7ad6b8b7a4704d0aebe902fcfd528ce633f94c0c497dcbebf"} Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.739629 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.497528248 podStartE2EDuration="1m1.739610268s" podCreationTimestamp="2026-03-16 15:29:51 +0000 UTC" firstStartedPulling="2026-03-16 15:29:55.538686527 +0000 UTC m=+997.266076804" lastFinishedPulling="2026-03-16 15:30:51.780768537 +0000 UTC m=+1053.508158824" observedRunningTime="2026-03-16 15:30:52.727343758 +0000 UTC m=+1054.454734055" watchObservedRunningTime="2026-03-16 15:30:52.739610268 +0000 UTC m=+1054.467000555" Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.831127 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/openstack-galera-0" Mar 16 15:30:52 crc kubenswrapper[4736]: I0316 15:30:52.858931 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.735838257 podStartE2EDuration="1m1.858905338s" podCreationTimestamp="2026-03-16 15:29:51 +0000 UTC" firstStartedPulling="2026-03-16 15:29:54.797029922 +0000 UTC m=+996.524420209" lastFinishedPulling="2026-03-16 15:30:51.920097003 +0000 UTC m=+1053.647487290" observedRunningTime="2026-03-16 15:30:52.765216938 +0000 UTC m=+1054.492607225" watchObservedRunningTime="2026-03-16 15:30:52.858905338 +0000 UTC m=+1054.586295625" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.136789 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.136908 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.448300 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.448373 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.602264 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tq8tw"] Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.604090 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.617362 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnjm\" (UniqueName: \"kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.617472 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.629786 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq8tw"] Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.719140 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.719254 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnjm\" (UniqueName: \"kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.720148 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.749872 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnjm\" (UniqueName: \"kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm\") pod \"keystone-db-create-tq8tw\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.827970 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-430b-account-create-update-p6zhb"] Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.831739 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.839722 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.854433 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-430b-account-create-update-p6zhb"] Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.920977 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.922482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8q5b\" (UniqueName: \"kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b\") pod \"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:53 crc kubenswrapper[4736]: I0316 15:30:53.922583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts\") pod \"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.024569 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8q5b\" (UniqueName: \"kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b\") pod \"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.024715 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts\") pod \"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.025695 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts\") pod 
\"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.048013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8q5b\" (UniqueName: \"kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b\") pod \"keystone-430b-account-create-update-p6zhb\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.087074 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.159640 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.227786 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7224\" (UniqueName: \"kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224\") pod \"b568935b-8768-4dd6-b103-3e7e900706d2\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.228086 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts\") pod \"b568935b-8768-4dd6-b103-3e7e900706d2\" (UID: \"b568935b-8768-4dd6-b103-3e7e900706d2\") " Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.229591 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b568935b-8768-4dd6-b103-3e7e900706d2" (UID: "b568935b-8768-4dd6-b103-3e7e900706d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.234849 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224" (OuterVolumeSpecName: "kube-api-access-d7224") pod "b568935b-8768-4dd6-b103-3e7e900706d2" (UID: "b568935b-8768-4dd6-b103-3e7e900706d2"). InnerVolumeSpecName "kube-api-access-d7224". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.330535 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b568935b-8768-4dd6-b103-3e7e900706d2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.330596 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7224\" (UniqueName: \"kubernetes.io/projected/b568935b-8768-4dd6-b103-3e7e900706d2-kube-api-access-d7224\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.484070 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq8tw"] Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.709049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kb9cl" event={"ID":"b568935b-8768-4dd6-b103-3e7e900706d2","Type":"ContainerDied","Data":"fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8"} Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.709096 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd69cc0c052df18c3e5b6bbec58ed4784f9fa9c08e809f1c854b071a7733f0a8" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.709182 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kb9cl" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.713341 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq8tw" event={"ID":"81dee002-8da2-4a98-a38a-4d3b55609e79","Type":"ContainerStarted","Data":"2709faefc4553add5911dd82703f8bb247ee54ada241fc8eea8be408fda322f0"} Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.713406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq8tw" event={"ID":"81dee002-8da2-4a98-a38a-4d3b55609e79","Type":"ContainerStarted","Data":"63fb03f8b7753aa620b248aa6c0edfbdf1777ad8e445d7ca07f4c5ecbd6c25ea"} Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.737176 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-tq8tw" podStartSLOduration=1.7371464639999998 podStartE2EDuration="1.737146464s" podCreationTimestamp="2026-03-16 15:30:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:30:54.729591045 +0000 UTC m=+1056.456981332" watchObservedRunningTime="2026-03-16 15:30:54.737146464 +0000 UTC m=+1056.464536751" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.796314 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-430b-account-create-update-p6zhb"] Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.840760 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v2mn2"] Mar 16 15:30:54 crc kubenswrapper[4736]: E0316 15:30:54.841257 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b568935b-8768-4dd6-b103-3e7e900706d2" containerName="mariadb-account-create-update" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.841275 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b568935b-8768-4dd6-b103-3e7e900706d2" containerName="mariadb-account-create-update" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.841478 4736 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b568935b-8768-4dd6-b103-3e7e900706d2" containerName="mariadb-account-create-update" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.842054 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.861797 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v2mn2"] Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.942833 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:54 crc kubenswrapper[4736]: I0316 15:30:54.942889 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zdbb\" (UniqueName: \"kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.045003 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.045083 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zdbb\" (UniqueName: \"kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.046385 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.065421 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zdbb\" (UniqueName: \"kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb\") pod \"placement-db-create-v2mn2\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.214805 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.302316 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-18c6-account-create-update-887bw"] Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.303554 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.312879 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.336456 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-18c6-account-create-update-887bw"] Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.456053 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.456629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnhg\" (UniqueName: \"kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.557998 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.558132 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnhg\" (UniqueName: \"kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.559226 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.620265 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnhg\" (UniqueName: \"kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg\") pod \"placement-18c6-account-create-update-887bw\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.633004 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.725582 4736 generic.go:334] "Generic (PLEG): container finished" podID="6809e9ed-0919-419c-87f4-86d756616c27" containerID="638eb46df411b49e1b089ab7bce4669146bbbe801a15dafbd1810575e7003ebd" exitCode=0 Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.725748 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-430b-account-create-update-p6zhb" event={"ID":"6809e9ed-0919-419c-87f4-86d756616c27","Type":"ContainerDied","Data":"638eb46df411b49e1b089ab7bce4669146bbbe801a15dafbd1810575e7003ebd"} Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.725786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-430b-account-create-update-p6zhb" event={"ID":"6809e9ed-0919-419c-87f4-86d756616c27","Type":"ContainerStarted","Data":"d5030fd2de062d81ee9210fd9507aabb0e5ba0ec37e84407be577ffa398c32f9"} Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.727441 4736 generic.go:334] "Generic (PLEG): container finished" podID="81dee002-8da2-4a98-a38a-4d3b55609e79" containerID="2709faefc4553add5911dd82703f8bb247ee54ada241fc8eea8be408fda322f0" exitCode=0 Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.727534 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq8tw" event={"ID":"81dee002-8da2-4a98-a38a-4d3b55609e79","Type":"ContainerDied","Data":"2709faefc4553add5911dd82703f8bb247ee54ada241fc8eea8be408fda322f0"} Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.917140 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v2mn2"] Mar 16 15:30:55 crc kubenswrapper[4736]: I0316 15:30:55.974472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-18c6-account-create-update-887bw"] Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.187061 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.527626 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.740907 4736 generic.go:334] "Generic (PLEG): container finished" podID="e064290b-372f-478f-b907-557ecf3e5bc3" containerID="64de37c7cd958cf4e4fd7dd8eb36fce0b47e291836bf38a9ce90cbd6a083d0bb" exitCode=0 Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.740979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-18c6-account-create-update-887bw" event={"ID":"e064290b-372f-478f-b907-557ecf3e5bc3","Type":"ContainerDied","Data":"64de37c7cd958cf4e4fd7dd8eb36fce0b47e291836bf38a9ce90cbd6a083d0bb"} Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.741011 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-18c6-account-create-update-887bw" event={"ID":"e064290b-372f-478f-b907-557ecf3e5bc3","Type":"ContainerStarted","Data":"40b8e0620965c114059d26500cb5e86030532d086ba8d58e5b575483267b5e9d"} Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.749005 4736 generic.go:334] "Generic (PLEG): container finished" podID="78aac930-9cb0-48f5-80a3-aa0b50917c88" containerID="bd6412bc313477a64c694b08a53e71b2d3cf2d30aafa60f32ab1431a61d42035" exitCode=0 Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.749086 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-v2mn2" event={"ID":"78aac930-9cb0-48f5-80a3-aa0b50917c88","Type":"ContainerDied","Data":"bd6412bc313477a64c694b08a53e71b2d3cf2d30aafa60f32ab1431a61d42035"} Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.749168 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v2mn2" event={"ID":"78aac930-9cb0-48f5-80a3-aa0b50917c88","Type":"ContainerStarted","Data":"ff1976ab4116ab4d2a28820e44a94dc76f4c99c7f56ad98deed75118136543a4"} Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.814136 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.815904 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.900285 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.900351 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.900415 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkt6\" (UniqueName: \"kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.900448 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.900505 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:56 crc kubenswrapper[4736]: I0316 15:30:56.936864 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.006127 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.006473 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.006596 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkt6\" (UniqueName: \"kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.006692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.006805 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.008471 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.009794 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.013836 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.014189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.053059 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkt6\" (UniqueName: \"kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6\") pod \"dnsmasq-dns-6c8c8d4885-6znjz\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.192077 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.526024 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.564344 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.651908 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts\") pod \"81dee002-8da2-4a98-a38a-4d3b55609e79\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.652523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81dee002-8da2-4a98-a38a-4d3b55609e79" (UID: "81dee002-8da2-4a98-a38a-4d3b55609e79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.653157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnjm\" (UniqueName: \"kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm\") pod \"81dee002-8da2-4a98-a38a-4d3b55609e79\" (UID: \"81dee002-8da2-4a98-a38a-4d3b55609e79\") " Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.653222 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8q5b\" (UniqueName: \"kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b\") pod \"6809e9ed-0919-419c-87f4-86d756616c27\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.653318 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts\") pod \"6809e9ed-0919-419c-87f4-86d756616c27\" (UID: \"6809e9ed-0919-419c-87f4-86d756616c27\") " Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.653778 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81dee002-8da2-4a98-a38a-4d3b55609e79-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.653942 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6809e9ed-0919-419c-87f4-86d756616c27" (UID: "6809e9ed-0919-419c-87f4-86d756616c27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.658975 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b" (OuterVolumeSpecName: "kube-api-access-w8q5b") pod "6809e9ed-0919-419c-87f4-86d756616c27" (UID: "6809e9ed-0919-419c-87f4-86d756616c27"). InnerVolumeSpecName "kube-api-access-w8q5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.661413 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm" (OuterVolumeSpecName: "kube-api-access-wdnjm") pod "81dee002-8da2-4a98-a38a-4d3b55609e79" (UID: "81dee002-8da2-4a98-a38a-4d3b55609e79"). InnerVolumeSpecName "kube-api-access-wdnjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.694164 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-2nl2c"] Mar 16 15:30:57 crc kubenswrapper[4736]: E0316 15:30:57.694634 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dee002-8da2-4a98-a38a-4d3b55609e79" containerName="mariadb-database-create" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.694658 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dee002-8da2-4a98-a38a-4d3b55609e79" containerName="mariadb-database-create" Mar 16 15:30:57 crc kubenswrapper[4736]: E0316 15:30:57.694687 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6809e9ed-0919-419c-87f4-86d756616c27" containerName="mariadb-account-create-update" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.694696 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6809e9ed-0919-419c-87f4-86d756616c27" containerName="mariadb-account-create-update" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.694874 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="81dee002-8da2-4a98-a38a-4d3b55609e79" containerName="mariadb-database-create" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.694906 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6809e9ed-0919-419c-87f4-86d756616c27" containerName="mariadb-account-create-update" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.695552 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.711076 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2nl2c"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.758395 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gh8\" (UniqueName: \"kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.758482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.758533 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdnjm\" (UniqueName: \"kubernetes.io/projected/81dee002-8da2-4a98-a38a-4d3b55609e79-kube-api-access-wdnjm\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.758546 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8q5b\" (UniqueName: \"kubernetes.io/projected/6809e9ed-0919-419c-87f4-86d756616c27-kube-api-access-w8q5b\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.758557 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6809e9ed-0919-419c-87f4-86d756616c27-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.764405 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq8tw" event={"ID":"81dee002-8da2-4a98-a38a-4d3b55609e79","Type":"ContainerDied","Data":"63fb03f8b7753aa620b248aa6c0edfbdf1777ad8e445d7ca07f4c5ecbd6c25ea"} Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.764456 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63fb03f8b7753aa620b248aa6c0edfbdf1777ad8e445d7ca07f4c5ecbd6c25ea" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.764525 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq8tw" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.779044 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-430b-account-create-update-p6zhb" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.781598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-430b-account-create-update-p6zhb" event={"ID":"6809e9ed-0919-419c-87f4-86d756616c27","Type":"ContainerDied","Data":"d5030fd2de062d81ee9210fd9507aabb0e5ba0ec37e84407be577ffa398c32f9"} Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.781645 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5030fd2de062d81ee9210fd9507aabb0e5ba0ec37e84407be577ffa398c32f9" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.815943 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1291-account-create-update-64kxj"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.817067 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.819897 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.835841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1291-account-create-update-64kxj"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.860771 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gh8\" (UniqueName: \"kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.860865 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.861688 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.869578 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.883831 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gh8\" (UniqueName: \"kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8\") pod \"glance-db-create-2nl2c\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.966651 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcbb\" (UniqueName: \"kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.967892 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.971961 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.984621 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.991665 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.991662 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9rw6b" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.991961 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 16 15:30:57 crc kubenswrapper[4736]: I0316 15:30:57.993437 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.020212 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2nl2c" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.029620 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077210 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcbb\" (UniqueName: \"kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077369 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-cache\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077419 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077445 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0892ebc9-dbd4-4652-9691-13028da07f80-combined-ca-bundle\") pod \"swift-storage-0\" (UID: 
\"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfrhl\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-kube-api-access-bfrhl\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077537 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.077562 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-lock\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.080028 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.098204 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcbb\" (UniqueName: \"kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb\") pod \"glance-1291-account-create-update-64kxj\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.173848 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189666 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0892ebc9-dbd4-4652-9691-13028da07f80-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189732 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfrhl\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-kube-api-access-bfrhl\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189764 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189790 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-lock\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189865 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.189883 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-cache\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.190480 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-cache\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.191678 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.191714 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.191982 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:30:58.691761134 +0000 UTC m=+1060.419151421 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.192733 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0892ebc9-dbd4-4652-9691-13028da07f80-lock\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.192851 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.211303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0892ebc9-dbd4-4652-9691-13028da07f80-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.271897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfrhl\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-kube-api-access-bfrhl\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.301638 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.396299 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.469666 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.485859 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qpbdn"] Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.486262 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78aac930-9cb0-48f5-80a3-aa0b50917c88" containerName="mariadb-database-create" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.486281 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="78aac930-9cb0-48f5-80a3-aa0b50917c88" containerName="mariadb-database-create" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.486439 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="78aac930-9cb0-48f5-80a3-aa0b50917c88" containerName="mariadb-database-create" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.487012 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503366 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts\") pod \"78aac930-9cb0-48f5-80a3-aa0b50917c88\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503510 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zdbb\" (UniqueName: \"kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb\") pod \"78aac930-9cb0-48f5-80a3-aa0b50917c88\" (UID: \"78aac930-9cb0-48f5-80a3-aa0b50917c88\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503688 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gs2s\" (UniqueName: \"kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503735 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503772 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.503862 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices\") pod \"swift-ring-rebalance-qpbdn\" (UID: 
\"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.507074 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78aac930-9cb0-48f5-80a3-aa0b50917c88" (UID: "78aac930-9cb0-48f5-80a3-aa0b50917c88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.510214 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.510450 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.510585 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.562763 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb" (OuterVolumeSpecName: "kube-api-access-8zdbb") pod "78aac930-9cb0-48f5-80a3-aa0b50917c88" (UID: "78aac930-9cb0-48f5-80a3-aa0b50917c88"). InnerVolumeSpecName "kube-api-access-8zdbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.604947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605012 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605037 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605144 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605166 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4gs2s\" (UniqueName: \"kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605211 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605256 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zdbb\" (UniqueName: \"kubernetes.io/projected/78aac930-9cb0-48f5-80a3-aa0b50917c88-kube-api-access-8zdbb\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.605267 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78aac930-9cb0-48f5-80a3-aa0b50917c88-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.609252 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.609436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.609668 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.610339 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.610997 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.627389 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.630196 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qpbdn"] Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.632651 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.654313 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qpbdn"] Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.655024 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-4gs2s], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-qpbdn" podUID="8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.686535 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gs2s\" (UniqueName: \"kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s\") pod \"swift-ring-rebalance-qpbdn\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.709838 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.710186 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.710209 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: E0316 15:30:58.710276 4736 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:30:59.710249208 +0000 UTC m=+1061.437639495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.817884 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts\") pod \"e064290b-372f-478f-b907-557ecf3e5bc3\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.817992 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nnhg\" (UniqueName: \"kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg\") pod \"e064290b-372f-478f-b907-557ecf3e5bc3\" (UID: \"e064290b-372f-478f-b907-557ecf3e5bc3\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.818728 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e064290b-372f-478f-b907-557ecf3e5bc3" (UID: "e064290b-372f-478f-b907-557ecf3e5bc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.848019 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg" (OuterVolumeSpecName: "kube-api-access-9nnhg") pod "e064290b-372f-478f-b907-557ecf3e5bc3" (UID: "e064290b-372f-478f-b907-557ecf3e5bc3"). InnerVolumeSpecName "kube-api-access-9nnhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.848674 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v2mn2" event={"ID":"78aac930-9cb0-48f5-80a3-aa0b50917c88","Type":"ContainerDied","Data":"ff1976ab4116ab4d2a28820e44a94dc76f4c99c7f56ad98deed75118136543a4"} Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.848712 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1976ab4116ab4d2a28820e44a94dc76f4c99c7f56ad98deed75118136543a4" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.849016 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v2mn2" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.852148 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e064290b-372f-478f-b907-557ecf3e5bc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.852179 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nnhg\" (UniqueName: \"kubernetes.io/projected/e064290b-372f-478f-b907-557ecf3e5bc3-kube-api-access-9nnhg\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.856307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" event={"ID":"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9","Type":"ContainerStarted","Data":"d30810ed002a08d8c57640b2c719af899038ad94462db92a6b9913418b2a10ba"} Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.857583 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.858308 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-18c6-account-create-update-887bw" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.858557 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-18c6-account-create-update-887bw" event={"ID":"e064290b-372f-478f-b907-557ecf3e5bc3","Type":"ContainerDied","Data":"40b8e0620965c114059d26500cb5e86030532d086ba8d58e5b575483267b5e9d"} Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.858610 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b8e0620965c114059d26500cb5e86030532d086ba8d58e5b575483267b5e9d" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.909274 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955312 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955383 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955409 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gs2s\" (UniqueName: \"kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955469 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955544 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955601 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.955815 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices\") pod \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\" (UID: \"8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107\") " Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.956784 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.960511 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.963394 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts" (OuterVolumeSpecName: "scripts") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.974084 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.974635 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.976281 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s" (OuterVolumeSpecName: "kube-api-access-4gs2s") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "kube-api-access-4gs2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:30:58 crc kubenswrapper[4736]: I0316 15:30:58.980291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" (UID: "8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065474 4736 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065509 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065521 4736 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065533 4736 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065545 4736 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065553 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gs2s\" (UniqueName: \"kubernetes.io/projected/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-kube-api-access-4gs2s\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.065563 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.105415 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.281058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1291-account-create-update-64kxj"] Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.452843 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 16 15:30:59 crc kubenswrapper[4736]: E0316 15:30:59.453276 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e064290b-372f-478f-b907-557ecf3e5bc3" containerName="mariadb-account-create-update" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.453298 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e064290b-372f-478f-b907-557ecf3e5bc3" containerName="mariadb-account-create-update" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.453495 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e064290b-372f-478f-b907-557ecf3e5bc3" containerName="mariadb-account-create-update" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.455523 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.471715 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.472185 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.472383 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hl5vb" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.472383 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486794 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486846 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tst7\" (UniqueName: \"kubernetes.io/projected/38835fa0-dde3-4eb4-8ec0-7627436b49ca-kube-api-access-4tst7\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486868 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486894 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486970 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.486996 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-scripts\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.487017 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-config\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.494201 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-2nl2c"] Mar 16 15:30:59 crc 
kubenswrapper[4736]: I0316 15:30:59.518395 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.588994 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589046 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tst7\" (UniqueName: \"kubernetes.io/projected/38835fa0-dde3-4eb4-8ec0-7627436b49ca-kube-api-access-4tst7\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589072 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589093 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589187 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589215 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-scripts\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.589231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-config\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.590236 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-scripts\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.592034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38835fa0-dde3-4eb4-8ec0-7627436b49ca-config\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.610016 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.610303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.615183 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.617694 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tst7\" (UniqueName: \"kubernetes.io/projected/38835fa0-dde3-4eb4-8ec0-7627436b49ca-kube-api-access-4tst7\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.621536 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38835fa0-dde3-4eb4-8ec0-7627436b49ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"38835fa0-dde3-4eb4-8ec0-7627436b49ca\") " pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.792421 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:30:59 crc kubenswrapper[4736]: E0316 15:30:59.792719 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:30:59 crc kubenswrapper[4736]: E0316 15:30:59.792760 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:30:59 crc kubenswrapper[4736]: E0316 15:30:59.792847 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:31:01.792820072 +0000 UTC m=+1063.520210359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.807727 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.874260 4736 generic.go:334] "Generic (PLEG): container finished" podID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerID="4b4e93accd2785e1e689ee30a98b7d3638386496474367678785c525387ee714" exitCode=0 Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.874577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" event={"ID":"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9","Type":"ContainerDied","Data":"4b4e93accd2785e1e689ee30a98b7d3638386496474367678785c525387ee714"} Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.880145 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1291-account-create-update-64kxj" event={"ID":"21b9e5ba-7a59-41be-9981-4dcf75383b70","Type":"ContainerStarted","Data":"718bcfdd2a4b2526e48e296eba2be2a0de00ad286c262892a19015dc2f16ad38"} Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.880209 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1291-account-create-update-64kxj" event={"ID":"21b9e5ba-7a59-41be-9981-4dcf75383b70","Type":"ContainerStarted","Data":"5012469f797279847cc86d11034d43e9402e5975e3d5e6e19c0ed183b20acb2c"} Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.895346 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qpbdn" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.895508 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2nl2c" event={"ID":"e0f1f86e-2324-4429-beb7-14d4d02563fe","Type":"ContainerStarted","Data":"697af517e42b276811837143ee066426967533af73af2e7a8ca850d0a1a3354f"} Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.895585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2nl2c" event={"ID":"e0f1f86e-2324-4429-beb7-14d4d02563fe","Type":"ContainerStarted","Data":"fd5efbcacab2ee39eb4ab07f6aa98b17498193331bd13435b4f266357916136b"} Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.931191 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1291-account-create-update-64kxj" podStartSLOduration=2.931142729 podStartE2EDuration="2.931142729s" podCreationTimestamp="2026-03-16 15:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:30:59.913587412 +0000 UTC m=+1061.640977699" watchObservedRunningTime="2026-03-16 15:30:59.931142729 +0000 UTC m=+1061.658533016" Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.953269 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-qpbdn"] Mar 16 15:30:59 crc kubenswrapper[4736]: I0316 15:30:59.957916 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-qpbdn"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.011207 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-2nl2c" podStartSLOduration=3.011085937 podStartE2EDuration="3.011085937s" podCreationTimestamp="2026-03-16 15:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:30:59.973633107 +0000 UTC m=+1061.701023394" watchObservedRunningTime="2026-03-16 15:31:00.011085937 +0000 UTC m=+1061.738476224" Mar 16 
15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.030723 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kb9cl"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.072554 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kb9cl"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.102726 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-24qsf"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.103930 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.109027 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.127550 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-24qsf"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.200940 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.201061 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5j4m\" (UniqueName: \"kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.303185 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.303289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5j4m\" (UniqueName: \"kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.304467 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.326088 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5j4m\" (UniqueName: \"kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m\") pod \"root-account-create-update-24qsf\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.438400 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.502732 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.911555 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"38835fa0-dde3-4eb4-8ec0-7627436b49ca","Type":"ContainerStarted","Data":"5653f7a138cdbaeeedc416b30e7436ed1cdcca43b67f695db37ddaaa3348810e"} Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.915582 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0f1f86e-2324-4429-beb7-14d4d02563fe" containerID="697af517e42b276811837143ee066426967533af73af2e7a8ca850d0a1a3354f" exitCode=0 Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.915697 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2nl2c" event={"ID":"e0f1f86e-2324-4429-beb7-14d4d02563fe","Type":"ContainerDied","Data":"697af517e42b276811837143ee066426967533af73af2e7a8ca850d0a1a3354f"} Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.919156 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" event={"ID":"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9","Type":"ContainerStarted","Data":"771ad06af2d4c9ef36653f83baa4eed9e42d44669bf0c3fdcd58900470917f32"} Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.920055 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.923612 4736 generic.go:334] "Generic (PLEG): container finished" podID="21b9e5ba-7a59-41be-9981-4dcf75383b70" containerID="718bcfdd2a4b2526e48e296eba2be2a0de00ad286c262892a19015dc2f16ad38" exitCode=0 Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.923656 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1291-account-create-update-64kxj" event={"ID":"21b9e5ba-7a59-41be-9981-4dcf75383b70","Type":"ContainerDied","Data":"718bcfdd2a4b2526e48e296eba2be2a0de00ad286c262892a19015dc2f16ad38"} Mar 16 15:31:00 crc kubenswrapper[4736]: W0316 15:31:00.988762 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf39abb7_382d_4776_b257_0e47f0c50a64.slice/crio-ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb WatchSource:0}: Error finding container ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb: Status 404 returned error can't find the container with id ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.990379 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107" path="/var/lib/kubelet/pods/8c5d9b4a-4b3b-4cd0-a628-f07a31ffa107/volumes" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.990807 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b568935b-8768-4dd6-b103-3e7e900706d2" path="/var/lib/kubelet/pods/b568935b-8768-4dd6-b103-3e7e900706d2/volumes" Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.991363 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-24qsf"] Mar 16 15:31:00 crc kubenswrapper[4736]: I0316 15:31:00.996155 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podStartSLOduration=4.996130274 podStartE2EDuration="4.996130274s" podCreationTimestamp="2026-03-16 15:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:00.988572515 +0000 UTC m=+1062.715962812" watchObservedRunningTime="2026-03-16 15:31:00.996130274 +0000 UTC m=+1062.723520561" Mar 16 15:31:01 crc kubenswrapper[4736]: I0316 15:31:01.840801 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:31:01 crc kubenswrapper[4736]: E0316 15:31:01.841583 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:31:01 crc kubenswrapper[4736]: E0316 15:31:01.841608 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:31:01 crc kubenswrapper[4736]: E0316 15:31:01.841669 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:31:05.841651931 +0000 UTC m=+1067.569042218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:31:01 crc kubenswrapper[4736]: I0316 15:31:01.942085 4736 generic.go:334] "Generic (PLEG): container finished" podID="af39abb7-382d-4776-b257-0e47f0c50a64" containerID="9c560c8506d22d71c570cfe52b90c8a77fcb15d8633e3201d1e4a6dac25a5ab2" exitCode=0 Mar 16 15:31:01 crc kubenswrapper[4736]: I0316 15:31:01.943306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24qsf" event={"ID":"af39abb7-382d-4776-b257-0e47f0c50a64","Type":"ContainerDied","Data":"9c560c8506d22d71c570cfe52b90c8a77fcb15d8633e3201d1e4a6dac25a5ab2"} Mar 16 15:31:01 crc kubenswrapper[4736]: I0316 15:31:01.943361 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24qsf" event={"ID":"af39abb7-382d-4776-b257-0e47f0c50a64","Type":"ContainerStarted","Data":"ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb"} Mar 16 15:31:01 crc kubenswrapper[4736]: I0316 15:31:01.947525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"38835fa0-dde3-4eb4-8ec0-7627436b49ca","Type":"ContainerStarted","Data":"9017bc124eb5b05e7269b3f6c5a0f57c41aacaff9fa0b53d9b0aa9dc144e0a23"} Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.622618 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.671151 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts\") pod \"21b9e5ba-7a59-41be-9981-4dcf75383b70\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.671202 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcbb\" (UniqueName: \"kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb\") pod \"21b9e5ba-7a59-41be-9981-4dcf75383b70\" (UID: \"21b9e5ba-7a59-41be-9981-4dcf75383b70\") " Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.672034 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21b9e5ba-7a59-41be-9981-4dcf75383b70" (UID: "21b9e5ba-7a59-41be-9981-4dcf75383b70"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.694561 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb" (OuterVolumeSpecName: "kube-api-access-2lcbb") pod "21b9e5ba-7a59-41be-9981-4dcf75383b70" (UID: "21b9e5ba-7a59-41be-9981-4dcf75383b70"). InnerVolumeSpecName "kube-api-access-2lcbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.739280 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-2nl2c" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.773285 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts\") pod \"e0f1f86e-2324-4429-beb7-14d4d02563fe\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.773430 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gh8\" (UniqueName: \"kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8\") pod \"e0f1f86e-2324-4429-beb7-14d4d02563fe\" (UID: \"e0f1f86e-2324-4429-beb7-14d4d02563fe\") " Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.773961 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b9e5ba-7a59-41be-9981-4dcf75383b70-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.773981 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcbb\" (UniqueName: \"kubernetes.io/projected/21b9e5ba-7a59-41be-9981-4dcf75383b70-kube-api-access-2lcbb\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.774239 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0f1f86e-2324-4429-beb7-14d4d02563fe" (UID: "e0f1f86e-2324-4429-beb7-14d4d02563fe"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.778130 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8" (OuterVolumeSpecName: "kube-api-access-d8gh8") pod "e0f1f86e-2324-4429-beb7-14d4d02563fe" (UID: "e0f1f86e-2324-4429-beb7-14d4d02563fe"). InnerVolumeSpecName "kube-api-access-d8gh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.875712 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0f1f86e-2324-4429-beb7-14d4d02563fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.875747 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gh8\" (UniqueName: \"kubernetes.io/projected/e0f1f86e-2324-4429-beb7-14d4d02563fe-kube-api-access-d8gh8\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.958546 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1291-account-create-update-64kxj" event={"ID":"21b9e5ba-7a59-41be-9981-4dcf75383b70","Type":"ContainerDied","Data":"5012469f797279847cc86d11034d43e9402e5975e3d5e6e19c0ed183b20acb2c"} Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.959753 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5012469f797279847cc86d11034d43e9402e5975e3d5e6e19c0ed183b20acb2c" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.959949 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1291-account-create-update-64kxj" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.963594 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"38835fa0-dde3-4eb4-8ec0-7627436b49ca","Type":"ContainerStarted","Data":"7e96b95ec6ec17cafb61f500e2a18110f94c052afa79bc53bd989366f46ec651"} Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.963775 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.965840 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-2nl2c" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.965815 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-2nl2c" event={"ID":"e0f1f86e-2324-4429-beb7-14d4d02563fe","Type":"ContainerDied","Data":"fd5efbcacab2ee39eb4ab07f6aa98b17498193331bd13435b4f266357916136b"} Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.965968 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5efbcacab2ee39eb4ab07f6aa98b17498193331bd13435b4f266357916136b" Mar 16 15:31:02 crc kubenswrapper[4736]: I0316 15:31:02.994062 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.908184167 podStartE2EDuration="3.994037242s" podCreationTimestamp="2026-03-16 15:30:59 +0000 UTC" firstStartedPulling="2026-03-16 15:31:00.519731857 +0000 UTC m=+1062.247122134" lastFinishedPulling="2026-03-16 15:31:01.605584932 +0000 UTC m=+1063.332975209" observedRunningTime="2026-03-16 15:31:02.992819147 +0000 UTC m=+1064.720209424" watchObservedRunningTime="2026-03-16 15:31:02.994037242 +0000 UTC m=+1064.721427529" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.320492 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.386392 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts\") pod \"af39abb7-382d-4776-b257-0e47f0c50a64\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.386458 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5j4m\" (UniqueName: \"kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m\") pod \"af39abb7-382d-4776-b257-0e47f0c50a64\" (UID: \"af39abb7-382d-4776-b257-0e47f0c50a64\") " Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.386961 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af39abb7-382d-4776-b257-0e47f0c50a64" (UID: "af39abb7-382d-4776-b257-0e47f0c50a64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.391324 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m" (OuterVolumeSpecName: "kube-api-access-g5j4m") pod "af39abb7-382d-4776-b257-0e47f0c50a64" (UID: "af39abb7-382d-4776-b257-0e47f0c50a64"). InnerVolumeSpecName "kube-api-access-g5j4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.514082 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5j4m\" (UniqueName: \"kubernetes.io/projected/af39abb7-382d-4776-b257-0e47f0c50a64-kube-api-access-g5j4m\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.514151 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af39abb7-382d-4776-b257-0e47f0c50a64-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.977009 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-24qsf" Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.977056 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-24qsf" event={"ID":"af39abb7-382d-4776-b257-0e47f0c50a64","Type":"ContainerDied","Data":"ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb"} Mar 16 15:31:03 crc kubenswrapper[4736]: I0316 15:31:03.977644 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce557e2dfd4c376c301bcb3af3164b9a935f6fb693fb1ac10c0da6603cffddbb" Mar 16 15:31:05 crc kubenswrapper[4736]: I0316 15:31:05.863971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:31:05 crc kubenswrapper[4736]: E0316 15:31:05.864328 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:31:05 crc kubenswrapper[4736]: E0316 15:31:05.865377 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:31:05 crc kubenswrapper[4736]: E0316 15:31:05.865488 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:31:13.865450202 +0000 UTC m=+1075.592840529 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.194236 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.319466 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.319763 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="dnsmasq-dns" containerID="cri-o://5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b" gracePeriod=10 Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.486957 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c748c6_cd8c_4510_8d12_3624b65ddebb.slice/crio-5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73c748c6_cd8c_4510_8d12_3624b65ddebb.slice/crio-conmon-5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b.scope\": RecentStats: unable to find data in memory cache]" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.832982 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.923811 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc\") pod \"73c748c6-cd8c-4510-8d12-3624b65ddebb\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.923985 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config\") pod \"73c748c6-cd8c-4510-8d12-3624b65ddebb\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.924064 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb\") pod \"73c748c6-cd8c-4510-8d12-3624b65ddebb\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.924087 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pswt\" (UniqueName: \"kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt\") pod \"73c748c6-cd8c-4510-8d12-3624b65ddebb\" (UID: \"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.924143 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb\") pod \"73c748c6-cd8c-4510-8d12-3624b65ddebb\" (UID: 
\"73c748c6-cd8c-4510-8d12-3624b65ddebb\") " Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.946237 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt" (OuterVolumeSpecName: "kube-api-access-4pswt") pod "73c748c6-cd8c-4510-8d12-3624b65ddebb" (UID: "73c748c6-cd8c-4510-8d12-3624b65ddebb"). InnerVolumeSpecName "kube-api-access-4pswt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.951821 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7g9td"] Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.952258 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="init" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952273 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="init" Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.952297 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b9e5ba-7a59-41be-9981-4dcf75383b70" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952305 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b9e5ba-7a59-41be-9981-4dcf75383b70" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.952318 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af39abb7-382d-4776-b257-0e47f0c50a64" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952324 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af39abb7-382d-4776-b257-0e47f0c50a64" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.952346 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="dnsmasq-dns" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952353 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="dnsmasq-dns" Mar 16 15:31:07 crc kubenswrapper[4736]: E0316 15:31:07.952364 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1f86e-2324-4429-beb7-14d4d02563fe" containerName="mariadb-database-create" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952371 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f1f86e-2324-4429-beb7-14d4d02563fe" containerName="mariadb-database-create" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952522 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b9e5ba-7a59-41be-9981-4dcf75383b70" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952536 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="af39abb7-382d-4776-b257-0e47f0c50a64" containerName="mariadb-account-create-update" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952543 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerName="dnsmasq-dns" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.952558 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f1f86e-2324-4429-beb7-14d4d02563fe" containerName="mariadb-database-create" Mar 16 15:31:07 crc 
kubenswrapper[4736]: I0316 15:31:07.953198 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.966533 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.966765 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l5s4t" Mar 16 15:31:07 crc kubenswrapper[4736]: I0316 15:31:07.974584 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7g9td"] Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.028587 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.028663 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.028727 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.028748 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff7rp\" (UniqueName: \"kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.028794 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pswt\" (UniqueName: \"kubernetes.io/projected/73c748c6-cd8c-4510-8d12-3624b65ddebb-kube-api-access-4pswt\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.047514 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config" (OuterVolumeSpecName: "config") pod "73c748c6-cd8c-4510-8d12-3624b65ddebb" (UID: "73c748c6-cd8c-4510-8d12-3624b65ddebb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.047778 4736 generic.go:334] "Generic (PLEG): container finished" podID="73c748c6-cd8c-4510-8d12-3624b65ddebb" containerID="5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b" exitCode=0 Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.048027 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.047838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" event={"ID":"73c748c6-cd8c-4510-8d12-3624b65ddebb","Type":"ContainerDied","Data":"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b"} Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.048171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd56bc579-prjm2" event={"ID":"73c748c6-cd8c-4510-8d12-3624b65ddebb","Type":"ContainerDied","Data":"28d23190597c1ba5ea4ea9c028b166ac89e99455a0af778a468a15089ce6b2c9"} Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.048195 4736 scope.go:117] "RemoveContainer" containerID="5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.054714 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73c748c6-cd8c-4510-8d12-3624b65ddebb" (UID: "73c748c6-cd8c-4510-8d12-3624b65ddebb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.061272 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73c748c6-cd8c-4510-8d12-3624b65ddebb" (UID: "73c748c6-cd8c-4510-8d12-3624b65ddebb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.066149 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73c748c6-cd8c-4510-8d12-3624b65ddebb" (UID: "73c748c6-cd8c-4510-8d12-3624b65ddebb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.073646 4736 scope.go:117] "RemoveContainer" containerID="a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.092604 4736 scope.go:117] "RemoveContainer" containerID="5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b" Mar 16 15:31:08 crc kubenswrapper[4736]: E0316 15:31:08.093712 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b\": container with ID starting with 5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b not found: ID does not exist" containerID="5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.093917 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b"} err="failed to get container status \"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b\": rpc error: code = NotFound desc = could not find container \"5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b\": container with ID starting with 5f6398bc9e4328f11dd81090d84c70b217d05752244d45eea37453d98ae05c6b not found: ID does not exist" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.094035 4736 scope.go:117] "RemoveContainer" containerID="a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280" Mar 16 15:31:08 crc kubenswrapper[4736]: E0316 15:31:08.095165 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280\": container with ID starting with a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280 not found: ID does not exist" containerID="a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.095261 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280"} err="failed to get container status \"a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280\": rpc error: code = NotFound desc = could not find container \"a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280\": container with ID starting with a41b1fbb9c52a1bc5affeef0b998b32c842eeddca2e3181966809c2f5e514280 not found: ID does not exist" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.130774 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.130818 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff7rp\" (UniqueName: \"kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.130963 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.131001 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.131085 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.131096 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.131138 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.131153 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73c748c6-cd8c-4510-8d12-3624b65ddebb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.135664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.135866 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.135905 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.150272 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff7rp\" (UniqueName: \"kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp\") pod \"glance-db-sync-7g9td\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.363054 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7g9td" Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.403019 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.410636 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd56bc579-prjm2"] Mar 16 15:31:08 crc kubenswrapper[4736]: I0316 15:31:08.962685 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7g9td"] Mar 16 15:31:09 crc kubenswrapper[4736]: I0316 15:31:08.999641 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c748c6-cd8c-4510-8d12-3624b65ddebb" path="/var/lib/kubelet/pods/73c748c6-cd8c-4510-8d12-3624b65ddebb/volumes" Mar 16 15:31:09 crc kubenswrapper[4736]: I0316 15:31:09.059475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7g9td" event={"ID":"1002af8b-786a-47c1-8872-f417cc88561e","Type":"ContainerStarted","Data":"da72a72aff60506bbe2b5d55347a5a3bfd72d757c06cd2d3aafdb9054db43bc3"} Mar 16 15:31:10 crc kubenswrapper[4736]: I0316 15:31:10.069836 4736 generic.go:334] "Generic (PLEG): container finished" podID="343be938-86f7-45c1-b8ef-a3143202be82" containerID="b19f745e7621dc0afc50381b80edc26830f3e6c65e4a8e103d89b5ac5336e755" exitCode=0 Mar 16 15:31:10 crc kubenswrapper[4736]: I0316 15:31:10.069901 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerDied","Data":"b19f745e7621dc0afc50381b80edc26830f3e6c65e4a8e103d89b5ac5336e755"} Mar 16 15:31:11 crc kubenswrapper[4736]: I0316 15:31:11.080964 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerStarted","Data":"501dc9d256dd94e2b7424efff25421d73d4dd41d42dcdab4adda73b4b8210496"} Mar 16 15:31:11 crc kubenswrapper[4736]: I0316 15:31:11.081567 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 16 15:31:11 crc kubenswrapper[4736]: I0316 15:31:11.118839 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.37993196 podStartE2EDuration="1m33.118796592s" podCreationTimestamp="2026-03-16 15:29:38 +0000 UTC" firstStartedPulling="2026-03-16 15:29:41.134678214 +0000 UTC m=+982.862068501" lastFinishedPulling="2026-03-16 15:30:35.873542856 +0000 UTC m=+1037.600933133" observedRunningTime="2026-03-16 15:31:11.117370242 +0000 UTC m=+1072.844760529" watchObservedRunningTime="2026-03-16 15:31:11.118796592 +0000 UTC m=+1072.846186879" Mar 16 15:31:12 crc kubenswrapper[4736]: I0316 15:31:12.090603 4736 generic.go:334] "Generic (PLEG): container finished" podID="582900c6-e591-4ff4-ac53-a8965af431e2" containerID="fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5" exitCode=0 Mar 16 15:31:12 crc kubenswrapper[4736]: I0316 15:31:12.090699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerDied","Data":"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5"} Mar 16 15:31:13 crc kubenswrapper[4736]: I0316 15:31:13.102981 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerStarted","Data":"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d"} Mar 16 15:31:13 crc kubenswrapper[4736]: I0316 15:31:13.104323 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:31:13 crc kubenswrapper[4736]: I0316 15:31:13.138611 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371941.71625 podStartE2EDuration="1m35.138525795s" podCreationTimestamp="2026-03-16 15:29:38 +0000 UTC" firstStartedPulling="2026-03-16 15:29:41.677350679 +0000 UTC m=+983.404740966" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:13.127140829 +0000 UTC m=+1074.854531266" watchObservedRunningTime="2026-03-16 15:31:13.138525795 +0000 UTC m=+1074.865916122" Mar 16 15:31:13 crc kubenswrapper[4736]: I0316 15:31:13.958041 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:31:13 crc kubenswrapper[4736]: E0316 15:31:13.958241 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:31:13 crc kubenswrapper[4736]: E0316 15:31:13.958383 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:31:13 crc kubenswrapper[4736]: E0316 15:31:13.958446 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:31:29.958424261 +0000 UTC m=+1091.685814548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:31:14 crc kubenswrapper[4736]: I0316 15:31:14.761973 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:31:14 crc kubenswrapper[4736]: I0316 15:31:14.836424 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jchb9" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.068188 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9wkhh-config-ktfgs"] Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.069325 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.074204 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.097926 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh-config-ktfgs"] Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185206 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185347 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185387 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185632 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2jx\" (UniqueName: \"kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.185742 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.288091 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.288334 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn\") pod 
\"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.288544 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.288644 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.289778 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz2jx\" (UniqueName: \"kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.290408 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.290566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.291476 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.291598 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.291796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.294489 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts\") pod 
\"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.317499 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz2jx\" (UniqueName: \"kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx\") pod \"ovn-controller-9wkhh-config-ktfgs\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.390499 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:15 crc kubenswrapper[4736]: I0316 15:31:15.961472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh-config-ktfgs"] Mar 16 15:31:15 crc kubenswrapper[4736]: W0316 15:31:15.989753 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc9c2afb_4c51_45c2_9f06_cc2a194c2086.slice/crio-e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55 WatchSource:0}: Error finding container e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55: Status 404 returned error can't find the container with id e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55 Mar 16 15:31:16 crc kubenswrapper[4736]: I0316 15:31:16.145520 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-ktfgs" event={"ID":"fc9c2afb-4c51-45c2-9f06-cc2a194c2086","Type":"ContainerStarted","Data":"e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55"} Mar 16 15:31:17 crc kubenswrapper[4736]: I0316 15:31:17.169039 4736 generic.go:334] "Generic (PLEG): container finished" podID="fc9c2afb-4c51-45c2-9f06-cc2a194c2086" containerID="2fdc991772647e8f9e82a90a6e036097e6f4e3e7023d5725236a84aa3609e469" exitCode=0 Mar 16 15:31:17 crc kubenswrapper[4736]: I0316 15:31:17.169563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-ktfgs" event={"ID":"fc9c2afb-4c51-45c2-9f06-cc2a194c2086","Type":"ContainerDied","Data":"2fdc991772647e8f9e82a90a6e036097e6f4e3e7023d5725236a84aa3609e469"} Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.019728 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9wkhh" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.619644 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783300 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz2jx\" (UniqueName: \"kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783409 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783503 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783525 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783680 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run" (OuterVolumeSpecName: "var-run") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783732 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn\") pod \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\" (UID: \"fc9c2afb-4c51-45c2-9f06-cc2a194c2086\") " Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783773 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.783805 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.784773 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.784889 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts" (OuterVolumeSpecName: "scripts") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.785675 4736 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.785696 4736 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.785708 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.785718 4736 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.785731 4736 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-var-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.795538 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx" (OuterVolumeSpecName: "kube-api-access-zz2jx") pod "fc9c2afb-4c51-45c2-9f06-cc2a194c2086" (UID: "fc9c2afb-4c51-45c2-9f06-cc2a194c2086"). InnerVolumeSpecName "kube-api-access-zz2jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:18 crc kubenswrapper[4736]: I0316 15:31:18.888545 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz2jx\" (UniqueName: \"kubernetes.io/projected/fc9c2afb-4c51-45c2-9f06-cc2a194c2086-kube-api-access-zz2jx\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.203435 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-ktfgs" event={"ID":"fc9c2afb-4c51-45c2-9f06-cc2a194c2086","Type":"ContainerDied","Data":"e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55"} Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.203528 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4027cf2895a20c591566c393356b6870fb48eeda80a1f3d870ffda9621eec55" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.203633 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-ktfgs" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.787062 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9wkhh-config-ktfgs"] Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.801503 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9wkhh-config-ktfgs"] Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.895392 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.918750 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9wkhh-config-qgxrq"] Mar 16 15:31:19 crc kubenswrapper[4736]: E0316 15:31:19.919183 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9c2afb-4c51-45c2-9f06-cc2a194c2086" containerName="ovn-config" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.919210 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9c2afb-4c51-45c2-9f06-cc2a194c2086" containerName="ovn-config" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.919408 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9c2afb-4c51-45c2-9f06-cc2a194c2086" containerName="ovn-config" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.920063 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.924789 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 16 15:31:19 crc kubenswrapper[4736]: I0316 15:31:19.950599 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh-config-qgxrq"] Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010644 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010705 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010798 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010912 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010971 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkjc\" (UniqueName: \"kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.010998 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.112843 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkjc\" (UniqueName: \"kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.112893 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.112953 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.112976 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.113035 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.113097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.113374 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.113397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.114191 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.114349 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.115424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.140793 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkjc\" (UniqueName: \"kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc\") pod \"ovn-controller-9wkhh-config-qgxrq\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.241975 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.872155 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9wkhh-config-qgxrq"] Mar 16 15:31:20 crc kubenswrapper[4736]: I0316 15:31:20.993905 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9c2afb-4c51-45c2-9f06-cc2a194c2086" path="/var/lib/kubelet/pods/fc9c2afb-4c51-45c2-9f06-cc2a194c2086/volumes" Mar 16 15:31:21 crc kubenswrapper[4736]: I0316 15:31:21.222256 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-qgxrq" event={"ID":"2dbd08ca-3a7b-4539-9032-82c022dba460","Type":"ContainerStarted","Data":"d11ee4027cbb6ffabb8a1ca3ac17d91b9e6be1867b2cba83af248faa1b7b4005"} Mar 16 15:31:22 crc kubenswrapper[4736]: I0316 15:31:22.232930 4736 generic.go:334] "Generic (PLEG): container finished" podID="2dbd08ca-3a7b-4539-9032-82c022dba460" containerID="c4ccb2ca0fb811b54227ef052038e094027e03b3b98fdbd0bafa1e787643d441" exitCode=0 Mar 16 15:31:22 crc kubenswrapper[4736]: I0316 15:31:22.233064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-qgxrq" event={"ID":"2dbd08ca-3a7b-4539-9032-82c022dba460","Type":"ContainerDied","Data":"c4ccb2ca0fb811b54227ef052038e094027e03b3b98fdbd0bafa1e787643d441"} Mar 16 15:31:29 crc kubenswrapper[4736]: I0316 15:31:29.478324 4736 scope.go:117] "RemoveContainer" containerID="647bdc45d9a68be3ffa555e0db22de624942eb07560e00e78f8a83ba3b5c139e" Mar 16 15:31:29 crc kubenswrapper[4736]: I0316 15:31:29.803292 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.029703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:31:30 crc kubenswrapper[4736]: E0316 15:31:30.029896 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:31:30 crc kubenswrapper[4736]: E0316 15:31:30.029929 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:31:30 crc kubenswrapper[4736]: E0316 15:31:30.030005 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. 
No retries permitted until 2026-03-16 15:32:02.029980346 +0000 UTC m=+1123.757370633 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.477121 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-xbrjm"] Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.480642 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.512235 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xbrjm"] Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.639222 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27qr\" (UniqueName: \"kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.639357 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.714232 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.742479 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g27qr\" (UniqueName: \"kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.742613 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.743436 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.813670 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g27qr\" (UniqueName: \"kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr\") pod \"heat-db-create-xbrjm\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.855745 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.995797 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-a023-account-create-update-ncblm"] Mar 16 15:31:30 crc kubenswrapper[4736]: I0316 15:31:30.996941 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.003414 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.041249 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a023-account-create-update-ncblm"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.064787 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qwxkl"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.073755 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.095199 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qwxkl"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.152296 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9zm\" (UniqueName: \"kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.152576 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.255771 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.255893 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.255918 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9zm\" (UniqueName: \"kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.255978 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mm9g\" (UniqueName: 
\"kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.257474 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.260466 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8spsp"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.261787 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.275265 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8spsp"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.277740 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.280825 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.293202 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.302651 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4bf5w" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.349526 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tc7zt"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.350690 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.358417 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.358473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.358512 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd857\" (UniqueName: \"kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.358546 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.358669 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mm9g\" (UniqueName: \"kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.359785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.363286 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9zm\" (UniqueName: \"kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm\") pod \"heat-a023-account-create-update-ncblm\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.397705 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mm9g\" (UniqueName: \"kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g\") pod \"cinder-db-create-qwxkl\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.398186 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.417183 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tc7zt"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.439384 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-407f-account-create-update-cnjtc"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.440831 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.451273 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.462431 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwvvm\" (UniqueName: \"kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.462502 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.462556 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.462594 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd857\" (UniqueName: \"kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.462668 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.472490 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.480844 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-407f-account-create-update-cnjtc"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.483900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle\") pod \"keystone-db-sync-8spsp\" (UID: 
\"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.543586 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd857\" (UniqueName: \"kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857\") pod \"keystone-db-sync-8spsp\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.557968 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-x6q66"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.564199 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.582210 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwvvm\" (UniqueName: \"kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.582392 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.582656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.583331 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4l7s\" (UniqueName: \"kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.584444 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.592479 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.619848 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.651168 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwvvm\" (UniqueName: \"kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm\") pod \"neutron-db-create-tc7zt\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.651277 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-x6q66"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.691230 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4l7s\" (UniqueName: \"kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.691531 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.691593 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.691686 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5cq\" (UniqueName: \"kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.692946 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.721883 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ed8c-account-create-update-pxbvq"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.723545 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.734221 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.750148 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4l7s\" (UniqueName: \"kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s\") pod \"neutron-407f-account-create-update-cnjtc\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.762185 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ed8c-account-create-update-pxbvq"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.794180 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.794557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5cq\" (UniqueName: \"kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.795136 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.831824 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.842573 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.847854 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5cq\" (UniqueName: \"kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq\") pod \"barbican-db-create-x6q66\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.868169 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-f7de-account-create-update-wvjgd"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.876574 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.883876 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.893840 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7de-account-create-update-wvjgd"] Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.898727 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.898995 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t295h\" (UniqueName: \"kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:31 crc kubenswrapper[4736]: I0316 15:31:31.926517 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.004278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.004349 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.004405 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t295h\" (UniqueName: \"kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.004473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7z78\" (UniqueName: \"kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.005383 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: 
\"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.026458 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t295h\" (UniqueName: \"kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h\") pod \"barbican-ed8c-account-create-update-pxbvq\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.077494 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.106290 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.107131 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.109714 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7z78\" (UniqueName: \"kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.129321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7z78\" (UniqueName: \"kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78\") pod \"cinder-f7de-account-create-update-wvjgd\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:32 crc kubenswrapper[4736]: I0316 15:31:32.213022 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:33 crc kubenswrapper[4736]: E0316 15:31:33.150420 4736 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.30:36920->38.102.83.30:38289: read tcp 38.102.83.30:36920->38.102.83.30:38289: read: connection reset by peer Mar 16 15:31:34 crc kubenswrapper[4736]: E0316 15:31:34.134034 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:31:34 crc kubenswrapper[4736]: E0316 15:31:34.134094 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:31:34 crc kubenswrapper[4736]: E0316 15:31:34.134327 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff7rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-7g9td_openstack(1002af8b-786a-47c1-8872-f417cc88561e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:31:34 crc kubenswrapper[4736]: E0316 15:31:34.135796 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/glance-db-sync-7g9td" podUID="1002af8b-786a-47c1-8872-f417cc88561e" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.217581 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.355147 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.355695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.355853 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwkjc\" (UniqueName: \"kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.355953 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.355989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.356136 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts\") pod \"2dbd08ca-3a7b-4539-9032-82c022dba460\" (UID: \"2dbd08ca-3a7b-4539-9032-82c022dba460\") " Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.357365 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.357392 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts" (OuterVolumeSpecName: "scripts") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.357461 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run" (OuterVolumeSpecName: "var-run") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.357489 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.357513 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.368424 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc" (OuterVolumeSpecName: "kube-api-access-dwkjc") pod "2dbd08ca-3a7b-4539-9032-82c022dba460" (UID: "2dbd08ca-3a7b-4539-9032-82c022dba460"). InnerVolumeSpecName "kube-api-access-dwkjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.381380 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9wkhh-config-qgxrq" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.382071 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9wkhh-config-qgxrq" event={"ID":"2dbd08ca-3a7b-4539-9032-82c022dba460","Type":"ContainerDied","Data":"d11ee4027cbb6ffabb8a1ca3ac17d91b9e6be1867b2cba83af248faa1b7b4005"} Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.382221 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d11ee4027cbb6ffabb8a1ca3ac17d91b9e6be1867b2cba83af248faa1b7b4005" Mar 16 15:31:34 crc kubenswrapper[4736]: E0316 15:31:34.386342 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-glance-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/glance-db-sync-7g9td" podUID="1002af8b-786a-47c1-8872-f417cc88561e" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460545 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwkjc\" (UniqueName: \"kubernetes.io/projected/2dbd08ca-3a7b-4539-9032-82c022dba460-kube-api-access-dwkjc\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460584 4736 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460598 4736 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460607 4736 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460618 4736 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2dbd08ca-3a7b-4539-9032-82c022dba460-var-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:34 crc kubenswrapper[4736]: I0316 15:31:34.460626 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2dbd08ca-3a7b-4539-9032-82c022dba460-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.249566 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-x6q66"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.379920 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9wkhh-config-qgxrq"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.436612 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9wkhh-config-qgxrq"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.473249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x6q66" event={"ID":"2240a5bb-051a-419c-bc1d-f5dc902a10e6","Type":"ContainerStarted","Data":"6bed02e22a35d1d78134d129ce009be215bd0ac121d271ee7b0a34675035ce0d"} Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.744374 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-a023-account-create-update-ncblm"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.847758 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-f7de-account-create-update-wvjgd"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.852750 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qwxkl"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.859479 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ed8c-account-create-update-pxbvq"] Mar 16 15:31:35 crc kubenswrapper[4736]: I0316 15:31:35.994884 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-xbrjm"] Mar 16 15:31:36 crc kubenswrapper[4736]: W0316 15:31:36.015069 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e29bc35_4ebf_41e4_a7b0_ef90df644ca4.slice/crio-62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2 WatchSource:0}: Error finding container 62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2: Status 404 returned error can't find the container with id 62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2 Mar 16 15:31:36 crc kubenswrapper[4736]: W0316 15:31:36.033590 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecf0d51_28cb_47fc_9c0d_9d3042fbec0a.slice/crio-f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a WatchSource:0}: Error finding container f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a: Status 404 returned error can't find the container with id f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.039660 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tc7zt"] Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.068869 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-407f-account-create-update-cnjtc"] Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.105198 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8spsp"] Mar 16 15:31:36 crc kubenswrapper[4736]: W0316 15:31:36.124036 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d527aa_c409_49d5_8901_5bd60482dfe4.slice/crio-ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac WatchSource:0}: Error finding container ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac: Status 404 returned error can't find the container with id ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.490024 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qwxkl" event={"ID":"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17","Type":"ContainerStarted","Data":"390a54e7eb8e4628cc1c5449123960b89d45b7ae03f231d258115e5fef42a340"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.494473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tc7zt" event={"ID":"becf0d51-28cb-47fc-9c0d-9d3042fbec0a","Type":"ContainerStarted","Data":"f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.501585 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-db-sync-8spsp" event={"ID":"36d527aa-c409-49d5-8901-5bd60482dfe4","Type":"ContainerStarted","Data":"ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.503178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed8c-account-create-update-pxbvq" event={"ID":"e53ccca4-e4ca-4621-b748-47fd9cea24f7","Type":"ContainerStarted","Data":"53a15ec634f0d46d62306a1c5f51ee998812b65f3c3d21792dc43cd14de83dae"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.503205 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed8c-account-create-update-pxbvq" event={"ID":"e53ccca4-e4ca-4621-b748-47fd9cea24f7","Type":"ContainerStarted","Data":"74ee59d8f458f172875a8d672ba8d56db70cd68771df796b5431dc6ab2c21976"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.512858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-407f-account-create-update-cnjtc" event={"ID":"ab1dd355-d321-4c74-be86-7b850f60a065","Type":"ContainerStarted","Data":"160a38dba9c2fb2206f4e940d65a4a40aad00d6c3847715e00b2ddb368676914"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.519976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a023-account-create-update-ncblm" event={"ID":"7fe221db-0528-4bf5-a178-fc56b36d79f0","Type":"ContainerStarted","Data":"82cc9c688b89915b5512d2f403cf0c16a3dac46edaf03953fd33cf9dbdec5df5"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.520009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a023-account-create-update-ncblm" event={"ID":"7fe221db-0528-4bf5-a178-fc56b36d79f0","Type":"ContainerStarted","Data":"28df1afef7c9882014adf1d9048ade1f8707776aa72f5843c65b00518dac866a"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.530520 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7de-account-create-update-wvjgd" event={"ID":"2898e153-f18f-4134-bc3c-59928983c1b9","Type":"ContainerStarted","Data":"78ee47d11a732d90e0a223a49ea99a4d40543193cb137539392405ea124eaf7a"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.530605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7de-account-create-update-wvjgd" event={"ID":"2898e153-f18f-4134-bc3c-59928983c1b9","Type":"ContainerStarted","Data":"0acffd78b7c892402b2ae9ded149e234f1543a4b727fd976eea33b06d74a55c7"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.535342 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-ed8c-account-create-update-pxbvq" podStartSLOduration=5.53532236 podStartE2EDuration="5.53532236s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:36.529100647 +0000 UTC m=+1098.256490934" watchObservedRunningTime="2026-03-16 15:31:36.53532236 +0000 UTC m=+1098.262712647" Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.537361 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xbrjm" event={"ID":"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4","Type":"ContainerStarted","Data":"62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.542796 4736 generic.go:334] "Generic (PLEG): container finished" podID="2240a5bb-051a-419c-bc1d-f5dc902a10e6" 
containerID="caa48ce8fc3c1d250d20913ef1b6daeb412c47aa82baf1193fd75c6d0178037a" exitCode=0 Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.542864 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x6q66" event={"ID":"2240a5bb-051a-419c-bc1d-f5dc902a10e6","Type":"ContainerDied","Data":"caa48ce8fc3c1d250d20913ef1b6daeb412c47aa82baf1193fd75c6d0178037a"} Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.556433 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-a023-account-create-update-ncblm" podStartSLOduration=6.556407464 podStartE2EDuration="6.556407464s" podCreationTimestamp="2026-03-16 15:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:36.551680354 +0000 UTC m=+1098.279070641" watchObservedRunningTime="2026-03-16 15:31:36.556407464 +0000 UTC m=+1098.283797751" Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.600397 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-f7de-account-create-update-wvjgd" podStartSLOduration=5.600367764 podStartE2EDuration="5.600367764s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:36.592805364 +0000 UTC m=+1098.320195651" watchObservedRunningTime="2026-03-16 15:31:36.600367764 +0000 UTC m=+1098.327758051" Mar 16 15:31:36 crc kubenswrapper[4736]: I0316 15:31:36.990297 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbd08ca-3a7b-4539-9032-82c022dba460" path="/var/lib/kubelet/pods/2dbd08ca-3a7b-4539-9032-82c022dba460/volumes" Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.561744 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qwxkl" event={"ID":"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17","Type":"ContainerStarted","Data":"e9d300993f33bd28843615e27106f966a001f3876dd6b290452761d9a138627b"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.565429 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tc7zt" event={"ID":"becf0d51-28cb-47fc-9c0d-9d3042fbec0a","Type":"ContainerStarted","Data":"3e1f8046fe55418048c301dc329fbaa3d4ad78f5d23dae64130ca6b632dd40c0"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.568562 4736 generic.go:334] "Generic (PLEG): container finished" podID="e53ccca4-e4ca-4621-b748-47fd9cea24f7" containerID="53a15ec634f0d46d62306a1c5f51ee998812b65f3c3d21792dc43cd14de83dae" exitCode=0 Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.568657 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed8c-account-create-update-pxbvq" event={"ID":"e53ccca4-e4ca-4621-b748-47fd9cea24f7","Type":"ContainerDied","Data":"53a15ec634f0d46d62306a1c5f51ee998812b65f3c3d21792dc43cd14de83dae"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.570699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-407f-account-create-update-cnjtc" event={"ID":"ab1dd355-d321-4c74-be86-7b850f60a065","Type":"ContainerStarted","Data":"cfdd9a9596b270afe3bba4f10528b20e0354bbdfcfae1209e265f7c222c3584b"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.573542 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xbrjm" 
event={"ID":"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4","Type":"ContainerStarted","Data":"f02d9b45ae6d3129da7816c391fc38fd1657272770481d72938c3f133d08ff93"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.585313 4736 generic.go:334] "Generic (PLEG): container finished" podID="7fe221db-0528-4bf5-a178-fc56b36d79f0" containerID="82cc9c688b89915b5512d2f403cf0c16a3dac46edaf03953fd33cf9dbdec5df5" exitCode=0 Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.585421 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a023-account-create-update-ncblm" event={"ID":"7fe221db-0528-4bf5-a178-fc56b36d79f0","Type":"ContainerDied","Data":"82cc9c688b89915b5512d2f403cf0c16a3dac46edaf03953fd33cf9dbdec5df5"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.601208 4736 generic.go:334] "Generic (PLEG): container finished" podID="2898e153-f18f-4134-bc3c-59928983c1b9" containerID="78ee47d11a732d90e0a223a49ea99a4d40543193cb137539392405ea124eaf7a" exitCode=0 Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.601589 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7de-account-create-update-wvjgd" event={"ID":"2898e153-f18f-4134-bc3c-59928983c1b9","Type":"ContainerDied","Data":"78ee47d11a732d90e0a223a49ea99a4d40543193cb137539392405ea124eaf7a"} Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.604369 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qwxkl" podStartSLOduration=6.604352006 podStartE2EDuration="6.604352006s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:37.581502323 +0000 UTC m=+1099.308892610" watchObservedRunningTime="2026-03-16 15:31:37.604352006 +0000 UTC m=+1099.331742293" Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.605087 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-xbrjm" podStartSLOduration=7.605074177 podStartE2EDuration="7.605074177s" podCreationTimestamp="2026-03-16 15:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:37.59651961 +0000 UTC m=+1099.323909897" watchObservedRunningTime="2026-03-16 15:31:37.605074177 +0000 UTC m=+1099.332464474" Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.658888 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-tc7zt" podStartSLOduration=6.658862459 podStartE2EDuration="6.658862459s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:37.641798076 +0000 UTC m=+1099.369188373" watchObservedRunningTime="2026-03-16 15:31:37.658862459 +0000 UTC m=+1099.386252746" Mar 16 15:31:37 crc kubenswrapper[4736]: I0316 15:31:37.675039 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-407f-account-create-update-cnjtc" podStartSLOduration=6.675014847 podStartE2EDuration="6.675014847s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:37.660556246 +0000 UTC m=+1099.387946533" watchObservedRunningTime="2026-03-16 15:31:37.675014847 
+0000 UTC m=+1099.402405134" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.019295 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.202644 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr5cq\" (UniqueName: \"kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq\") pod \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.203007 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts\") pod \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\" (UID: \"2240a5bb-051a-419c-bc1d-f5dc902a10e6\") " Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.204024 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2240a5bb-051a-419c-bc1d-f5dc902a10e6" (UID: "2240a5bb-051a-419c-bc1d-f5dc902a10e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.211319 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq" (OuterVolumeSpecName: "kube-api-access-sr5cq") pod "2240a5bb-051a-419c-bc1d-f5dc902a10e6" (UID: "2240a5bb-051a-419c-bc1d-f5dc902a10e6"). InnerVolumeSpecName "kube-api-access-sr5cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.305207 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2240a5bb-051a-419c-bc1d-f5dc902a10e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.305246 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr5cq\" (UniqueName: \"kubernetes.io/projected/2240a5bb-051a-419c-bc1d-f5dc902a10e6-kube-api-access-sr5cq\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.619016 4736 generic.go:334] "Generic (PLEG): container finished" podID="0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" containerID="f02d9b45ae6d3129da7816c391fc38fd1657272770481d72938c3f133d08ff93" exitCode=0 Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.619119 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xbrjm" event={"ID":"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4","Type":"ContainerDied","Data":"f02d9b45ae6d3129da7816c391fc38fd1657272770481d72938c3f133d08ff93"} Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.629762 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-x6q66" event={"ID":"2240a5bb-051a-419c-bc1d-f5dc902a10e6","Type":"ContainerDied","Data":"6bed02e22a35d1d78134d129ce009be215bd0ac121d271ee7b0a34675035ce0d"} Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.629811 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bed02e22a35d1d78134d129ce009be215bd0ac121d271ee7b0a34675035ce0d" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.629879 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-x6q66" Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.632644 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" containerID="e9d300993f33bd28843615e27106f966a001f3876dd6b290452761d9a138627b" exitCode=0 Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.632712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qwxkl" event={"ID":"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17","Type":"ContainerDied","Data":"e9d300993f33bd28843615e27106f966a001f3876dd6b290452761d9a138627b"} Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.641427 4736 generic.go:334] "Generic (PLEG): container finished" podID="becf0d51-28cb-47fc-9c0d-9d3042fbec0a" containerID="3e1f8046fe55418048c301dc329fbaa3d4ad78f5d23dae64130ca6b632dd40c0" exitCode=0 Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.641573 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tc7zt" event={"ID":"becf0d51-28cb-47fc-9c0d-9d3042fbec0a","Type":"ContainerDied","Data":"3e1f8046fe55418048c301dc329fbaa3d4ad78f5d23dae64130ca6b632dd40c0"} Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.644713 4736 generic.go:334] "Generic (PLEG): container finished" podID="ab1dd355-d321-4c74-be86-7b850f60a065" containerID="cfdd9a9596b270afe3bba4f10528b20e0354bbdfcfae1209e265f7c222c3584b" exitCode=0 Mar 16 15:31:38 crc kubenswrapper[4736]: I0316 15:31:38.644966 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-407f-account-create-update-cnjtc" event={"ID":"ab1dd355-d321-4c74-be86-7b850f60a065","Type":"ContainerDied","Data":"cfdd9a9596b270afe3bba4f10528b20e0354bbdfcfae1209e265f7c222c3584b"} Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.090797 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.204955 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.210658 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.229385 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9zm\" (UniqueName: \"kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm\") pod \"7fe221db-0528-4bf5-a178-fc56b36d79f0\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.229581 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts\") pod \"7fe221db-0528-4bf5-a178-fc56b36d79f0\" (UID: \"7fe221db-0528-4bf5-a178-fc56b36d79f0\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.235627 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fe221db-0528-4bf5-a178-fc56b36d79f0" (UID: "7fe221db-0528-4bf5-a178-fc56b36d79f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.239977 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm" (OuterVolumeSpecName: "kube-api-access-wf9zm") pod "7fe221db-0528-4bf5-a178-fc56b36d79f0" (UID: "7fe221db-0528-4bf5-a178-fc56b36d79f0"). InnerVolumeSpecName "kube-api-access-wf9zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.331279 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t295h\" (UniqueName: \"kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h\") pod \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.331376 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts\") pod \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\" (UID: \"e53ccca4-e4ca-4621-b748-47fd9cea24f7\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.331482 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts\") pod \"2898e153-f18f-4134-bc3c-59928983c1b9\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.331584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7z78\" (UniqueName: \"kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78\") pod \"2898e153-f18f-4134-bc3c-59928983c1b9\" (UID: \"2898e153-f18f-4134-bc3c-59928983c1b9\") " Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.332511 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e53ccca4-e4ca-4621-b748-47fd9cea24f7" (UID: "e53ccca4-e4ca-4621-b748-47fd9cea24f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.332551 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2898e153-f18f-4134-bc3c-59928983c1b9" (UID: "2898e153-f18f-4134-bc3c-59928983c1b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.333320 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9zm\" (UniqueName: \"kubernetes.io/projected/7fe221db-0528-4bf5-a178-fc56b36d79f0-kube-api-access-wf9zm\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.333345 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fe221db-0528-4bf5-a178-fc56b36d79f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.333357 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e53ccca4-e4ca-4621-b748-47fd9cea24f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.333369 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2898e153-f18f-4134-bc3c-59928983c1b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.337279 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h" (OuterVolumeSpecName: "kube-api-access-t295h") pod "e53ccca4-e4ca-4621-b748-47fd9cea24f7" (UID: "e53ccca4-e4ca-4621-b748-47fd9cea24f7"). InnerVolumeSpecName "kube-api-access-t295h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.337872 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78" (OuterVolumeSpecName: "kube-api-access-p7z78") pod "2898e153-f18f-4134-bc3c-59928983c1b9" (UID: "2898e153-f18f-4134-bc3c-59928983c1b9"). InnerVolumeSpecName "kube-api-access-p7z78". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.435494 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t295h\" (UniqueName: \"kubernetes.io/projected/e53ccca4-e4ca-4621-b748-47fd9cea24f7-kube-api-access-t295h\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.435532 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7z78\" (UniqueName: \"kubernetes.io/projected/2898e153-f18f-4134-bc3c-59928983c1b9-kube-api-access-p7z78\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.656256 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-a023-account-create-update-ncblm" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.656228 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a023-account-create-update-ncblm" event={"ID":"7fe221db-0528-4bf5-a178-fc56b36d79f0","Type":"ContainerDied","Data":"28df1afef7c9882014adf1d9048ade1f8707776aa72f5843c65b00518dac866a"} Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.656410 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28df1afef7c9882014adf1d9048ade1f8707776aa72f5843c65b00518dac866a" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.661802 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-f7de-account-create-update-wvjgd" event={"ID":"2898e153-f18f-4134-bc3c-59928983c1b9","Type":"ContainerDied","Data":"0acffd78b7c892402b2ae9ded149e234f1543a4b727fd976eea33b06d74a55c7"} Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.661841 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acffd78b7c892402b2ae9ded149e234f1543a4b727fd976eea33b06d74a55c7" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.661904 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-f7de-account-create-update-wvjgd" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.666916 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ed8c-account-create-update-pxbvq" event={"ID":"e53ccca4-e4ca-4621-b748-47fd9cea24f7","Type":"ContainerDied","Data":"74ee59d8f458f172875a8d672ba8d56db70cd68771df796b5431dc6ab2c21976"} Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.667228 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74ee59d8f458f172875a8d672ba8d56db70cd68771df796b5431dc6ab2c21976" Mar 16 15:31:39 crc kubenswrapper[4736]: I0316 15:31:39.667058 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ed8c-account-create-update-pxbvq" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.886070 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.909324 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mm9g\" (UniqueName: \"kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g\") pod \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.909396 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts\") pod \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\" (UID: \"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17\") " Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.916652 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" (UID: "d8ebb150-6fe4-4bfc-92ec-948f02fc5d17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.918187 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g" (OuterVolumeSpecName: "kube-api-access-5mm9g") pod "d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" (UID: "d8ebb150-6fe4-4bfc-92ec-948f02fc5d17"). InnerVolumeSpecName "kube-api-access-5mm9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.955303 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.989467 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:42 crc kubenswrapper[4736]: I0316 15:31:42.995748 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013346 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4l7s\" (UniqueName: \"kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s\") pod \"ab1dd355-d321-4c74-be86-7b850f60a065\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013420 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwvvm\" (UniqueName: \"kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm\") pod \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013511 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts\") pod \"ab1dd355-d321-4c74-be86-7b850f60a065\" (UID: \"ab1dd355-d321-4c74-be86-7b850f60a065\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013568 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts\") pod \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts\") pod \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\" (UID: \"becf0d51-28cb-47fc-9c0d-9d3042fbec0a\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013643 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g27qr\" (UniqueName: \"kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr\") pod \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\" (UID: \"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4\") " Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013905 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mm9g\" (UniqueName: \"kubernetes.io/projected/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-kube-api-access-5mm9g\") on node \"crc\" DevicePath \"\"" Mar 16 
15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.013917 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.015087 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" (UID: "0e29bc35-4ebf-41e4-a7b0-ef90df644ca4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.018458 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "becf0d51-28cb-47fc-9c0d-9d3042fbec0a" (UID: "becf0d51-28cb-47fc-9c0d-9d3042fbec0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.019198 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab1dd355-d321-4c74-be86-7b850f60a065" (UID: "ab1dd355-d321-4c74-be86-7b850f60a065"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.031774 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s" (OuterVolumeSpecName: "kube-api-access-m4l7s") pod "ab1dd355-d321-4c74-be86-7b850f60a065" (UID: "ab1dd355-d321-4c74-be86-7b850f60a065"). InnerVolumeSpecName "kube-api-access-m4l7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.032284 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr" (OuterVolumeSpecName: "kube-api-access-g27qr") pod "0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" (UID: "0e29bc35-4ebf-41e4-a7b0-ef90df644ca4"). InnerVolumeSpecName "kube-api-access-g27qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.051229 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm" (OuterVolumeSpecName: "kube-api-access-gwvvm") pod "becf0d51-28cb-47fc-9c0d-9d3042fbec0a" (UID: "becf0d51-28cb-47fc-9c0d-9d3042fbec0a"). InnerVolumeSpecName "kube-api-access-gwvvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116306 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g27qr\" (UniqueName: \"kubernetes.io/projected/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-kube-api-access-g27qr\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116355 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4l7s\" (UniqueName: \"kubernetes.io/projected/ab1dd355-d321-4c74-be86-7b850f60a065-kube-api-access-m4l7s\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116370 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwvvm\" (UniqueName: \"kubernetes.io/projected/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-kube-api-access-gwvvm\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116388 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab1dd355-d321-4c74-be86-7b850f60a065-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116585 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.116598 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/becf0d51-28cb-47fc-9c0d-9d3042fbec0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.712533 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qwxkl" event={"ID":"d8ebb150-6fe4-4bfc-92ec-948f02fc5d17","Type":"ContainerDied","Data":"390a54e7eb8e4628cc1c5449123960b89d45b7ae03f231d258115e5fef42a340"} Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.712941 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="390a54e7eb8e4628cc1c5449123960b89d45b7ae03f231d258115e5fef42a340" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.712615 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qwxkl" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.714286 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tc7zt" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.714308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tc7zt" event={"ID":"becf0d51-28cb-47fc-9c0d-9d3042fbec0a","Type":"ContainerDied","Data":"f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a"} Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.714353 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0deeabf5be19ec141786de2912cf035b7a0cf70ff911f6919bf7e74cb91ed6a" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.721998 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8spsp" event={"ID":"36d527aa-c409-49d5-8901-5bd60482dfe4","Type":"ContainerStarted","Data":"0a6de9ff336fc1bf45bb128e769d0cb09c466b42651d6bf09d913ae4d5b2bbf1"} Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.724919 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-407f-account-create-update-cnjtc" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.725252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-407f-account-create-update-cnjtc" event={"ID":"ab1dd355-d321-4c74-be86-7b850f60a065","Type":"ContainerDied","Data":"160a38dba9c2fb2206f4e940d65a4a40aad00d6c3847715e00b2ddb368676914"} Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.725277 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160a38dba9c2fb2206f4e940d65a4a40aad00d6c3847715e00b2ddb368676914" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.727844 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-xbrjm" event={"ID":"0e29bc35-4ebf-41e4-a7b0-ef90df644ca4","Type":"ContainerDied","Data":"62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2"} Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.728207 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62ecd719fed0517d1a6b2b24a48f3025245b4cf4e929de8a0efe968e6c03ccb2" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.728140 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-xbrjm" Mar 16 15:31:43 crc kubenswrapper[4736]: I0316 15:31:43.754886 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8spsp" podStartSLOduration=6.073072721 podStartE2EDuration="12.75486365s" podCreationTimestamp="2026-03-16 15:31:31 +0000 UTC" firstStartedPulling="2026-03-16 15:31:36.127281289 +0000 UTC m=+1097.854671576" lastFinishedPulling="2026-03-16 15:31:42.809072218 +0000 UTC m=+1104.536462505" observedRunningTime="2026-03-16 15:31:43.748751466 +0000 UTC m=+1105.476141753" watchObservedRunningTime="2026-03-16 15:31:43.75486365 +0000 UTC m=+1105.482253937" Mar 16 15:31:47 crc kubenswrapper[4736]: I0316 15:31:47.789016 4736 generic.go:334] "Generic (PLEG): container finished" podID="36d527aa-c409-49d5-8901-5bd60482dfe4" containerID="0a6de9ff336fc1bf45bb128e769d0cb09c466b42651d6bf09d913ae4d5b2bbf1" exitCode=0 Mar 16 15:31:47 crc kubenswrapper[4736]: I0316 15:31:47.789155 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8spsp" event={"ID":"36d527aa-c409-49d5-8901-5bd60482dfe4","Type":"ContainerDied","Data":"0a6de9ff336fc1bf45bb128e769d0cb09c466b42651d6bf09d913ae4d5b2bbf1"} Mar 16 15:31:48 crc kubenswrapper[4736]: I0316 15:31:48.799323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7g9td" event={"ID":"1002af8b-786a-47c1-8872-f417cc88561e","Type":"ContainerStarted","Data":"d95bae2195104a52bfcd78054330122fc0dd60b8e564404c345d0771b5e4be63"} Mar 16 15:31:48 crc kubenswrapper[4736]: I0316 15:31:48.818857 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7g9td" podStartSLOduration=3.607468719 podStartE2EDuration="41.818832099s" podCreationTimestamp="2026-03-16 15:31:07 +0000 UTC" firstStartedPulling="2026-03-16 15:31:08.976051617 +0000 UTC m=+1070.703441894" lastFinishedPulling="2026-03-16 15:31:47.187414957 +0000 UTC m=+1108.914805274" observedRunningTime="2026-03-16 15:31:48.816363623 +0000 UTC m=+1110.543753900" watchObservedRunningTime="2026-03-16 15:31:48.818832099 +0000 UTC m=+1110.546222386" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.137269 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.161887 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data\") pod \"36d527aa-c409-49d5-8901-5bd60482dfe4\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.162013 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle\") pod \"36d527aa-c409-49d5-8901-5bd60482dfe4\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.162061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd857\" (UniqueName: \"kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857\") pod \"36d527aa-c409-49d5-8901-5bd60482dfe4\" (UID: \"36d527aa-c409-49d5-8901-5bd60482dfe4\") " Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.174134 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857" (OuterVolumeSpecName: "kube-api-access-cd857") pod "36d527aa-c409-49d5-8901-5bd60482dfe4" (UID: "36d527aa-c409-49d5-8901-5bd60482dfe4"). InnerVolumeSpecName "kube-api-access-cd857". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.207944 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d527aa-c409-49d5-8901-5bd60482dfe4" (UID: "36d527aa-c409-49d5-8901-5bd60482dfe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.213567 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data" (OuterVolumeSpecName: "config-data") pod "36d527aa-c409-49d5-8901-5bd60482dfe4" (UID: "36d527aa-c409-49d5-8901-5bd60482dfe4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.265655 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.265919 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd857\" (UniqueName: \"kubernetes.io/projected/36d527aa-c409-49d5-8901-5bd60482dfe4-kube-api-access-cd857\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.265997 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d527aa-c409-49d5-8901-5bd60482dfe4-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.809425 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8spsp" event={"ID":"36d527aa-c409-49d5-8901-5bd60482dfe4","Type":"ContainerDied","Data":"ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac"} Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.810091 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff931fb8acad1077a2b0e42a97c569d91f44efca716ee0aece171e8c98287aac" Mar 16 15:31:49 crc kubenswrapper[4736]: I0316 15:31:49.809690 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8spsp" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.151446 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152157 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d527aa-c409-49d5-8901-5bd60482dfe4" containerName="keystone-db-sync" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152253 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d527aa-c409-49d5-8901-5bd60482dfe4" containerName="keystone-db-sync" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152315 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152384 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152455 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e53ccca4-e4ca-4621-b748-47fd9cea24f7" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152503 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e53ccca4-e4ca-4621-b748-47fd9cea24f7" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152560 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe221db-0528-4bf5-a178-fc56b36d79f0" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152613 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe221db-0528-4bf5-a178-fc56b36d79f0" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152664 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2dbd08ca-3a7b-4539-9032-82c022dba460" containerName="ovn-config" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152710 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbd08ca-3a7b-4539-9032-82c022dba460" containerName="ovn-config" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152761 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becf0d51-28cb-47fc-9c0d-9d3042fbec0a" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.152813 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="becf0d51-28cb-47fc-9c0d-9d3042fbec0a" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.152948 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2240a5bb-051a-419c-bc1d-f5dc902a10e6" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153010 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2240a5bb-051a-419c-bc1d-f5dc902a10e6" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.153075 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2898e153-f18f-4134-bc3c-59928983c1b9" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153154 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2898e153-f18f-4134-bc3c-59928983c1b9" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.153220 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153275 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: E0316 15:31:50.153329 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1dd355-d321-4c74-be86-7b850f60a065" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153376 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1dd355-d321-4c74-be86-7b850f60a065" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153646 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2898e153-f18f-4134-bc3c-59928983c1b9" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153737 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e53ccca4-e4ca-4621-b748-47fd9cea24f7" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153810 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe221db-0528-4bf5-a178-fc56b36d79f0" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153870 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1dd355-d321-4c74-be86-7b850f60a065" containerName="mariadb-account-create-update" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153927 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2240a5bb-051a-419c-bc1d-f5dc902a10e6" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.153978 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="36d527aa-c409-49d5-8901-5bd60482dfe4" containerName="keystone-db-sync" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.154076 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbd08ca-3a7b-4539-9032-82c022dba460" containerName="ovn-config" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.154190 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.154263 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="becf0d51-28cb-47fc-9c0d-9d3042fbec0a" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.154317 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" containerName="mariadb-database-create" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.155437 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.186680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxdv\" (UniqueName: \"kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.187004 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.187188 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.187290 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.187372 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.193395 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.215756 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mkvcz"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.217087 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.224457 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4bf5w" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.224695 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.224835 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.228888 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.229151 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.268083 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkvcz"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.292978 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rm7c\" (UniqueName: \"kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293037 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293079 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293136 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxdv\" (UniqueName: \"kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293186 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293228 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293249 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293276 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293298 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.293349 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.294512 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.295453 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.295988 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.296764 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.350055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxdv\" (UniqueName: 
\"kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv\") pod \"dnsmasq-dns-558574fc65-pczl6\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.400455 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.401444 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.401590 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.401749 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.401832 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rm7c\" (UniqueName: \"kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.402024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.407787 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.410475 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.415740 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " 
pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.416348 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.449812 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.474447 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rm7c\" (UniqueName: \"kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c\") pod \"keystone-bootstrap-mkvcz\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.482428 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.503209 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-r5gq2"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.504674 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.516350 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.516692 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rngjg" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.541425 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.543358 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.543497 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.548906 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.549215 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-wqmm6" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.583945 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r5gq2"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.584010 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-wvncr"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.585138 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.586139 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.586299 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.600405 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qdc58" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.600658 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.600959 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.605752 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.605825 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdds\" (UniqueName: \"kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.605867 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724657 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724721 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724754 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2b6b\" (UniqueName: \"kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724777 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shld5\" (UniqueName: \"kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5\") pod \"horizon-966f65b8f-98zhj\" (UID: 
\"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdds\" (UniqueName: \"kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724927 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.724962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725019 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725042 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725067 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725093 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" 
Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.725176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.733152 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.761970 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.777138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdds\" (UniqueName: \"kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds\") pod \"heat-db-sync-r5gq2\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.823409 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wvncr"] Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.839957 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.840030 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.840144 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.840640 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.841548 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2b6b\" (UniqueName: \"kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 
15:31:50.841603 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shld5\" (UniqueName: \"kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.842361 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.844885 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.845170 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.845214 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.845246 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.845318 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.845761 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.847465 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.847964 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts\") pod \"horizon-966f65b8f-98zhj\" (UID: 
\"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.850889 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.854477 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.857029 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.902594 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.903020 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2b6b\" (UniqueName: \"kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b\") pod \"cinder-db-sync-wvncr\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.903188 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.906532 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shld5\" (UniqueName: \"kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5\") pod \"horizon-966f65b8f-98zhj\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:50 crc kubenswrapper[4736]: I0316 15:31:50.970693 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r5gq2" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.009716 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.046826 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cgsss"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.064118 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-wvncr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.065156 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.074775 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.089256 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.089552 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.089985 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g99ph" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.120437 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cgsss"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.157779 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-l6p6k"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.165734 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.198659 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6p6k"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.214937 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.217307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.217947 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-97w2k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.228669 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.230977 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.247052 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.253836 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.258918 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfbh\" (UniqueName: \"kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.258969 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259013 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config\") pod 
\"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259034 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259053 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9pv\" (UniqueName: \"kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259092 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259125 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259153 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfws4\" (UniqueName: \"kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259174 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259194 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259218 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259250 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd\") pod \"ceilometer-0\" (UID: 
\"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.259287 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.263200 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.264957 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.310608 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.334488 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361123 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361168 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvfbh\" (UniqueName: \"kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361247 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361267 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361281 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.361305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv9pv\" (UniqueName: \"kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366516 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366688 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfws4\" (UniqueName: \"kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366728 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366765 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 
15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.366895 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdlg\" (UniqueName: \"kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.381736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.384658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.394801 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.395153 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.396370 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.401692 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.403210 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.445017 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.451473 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.452255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.490228 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.491231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdlg\" (UniqueName: \"kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.491568 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.491692 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.491801 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.491898 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.492550 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.493278 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.493895 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.494221 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.494880 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv9pv\" (UniqueName: \"kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv\") pod \"ceilometer-0\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.502184 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4xvcj"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.503919 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.538274 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4xvcj"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.539783 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvfbh\" (UniqueName: \"kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh\") pod \"neutron-db-sync-cgsss\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.548273 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w7kkc" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.548562 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.548683 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.562254 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.571683 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.571850 4736 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-kfws4\" (UniqueName: \"kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4\") pod \"barbican-db-sync-l6p6k\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.588909 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdlg\" (UniqueName: \"kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg\") pod \"horizon-f6d95f5bf-v4prl\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.595822 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbzm\" (UniqueName: \"kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.595888 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.595922 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.595951 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596132 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596186 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596208 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.596242 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrdb\" (UniqueName: \"kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.618314 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700643 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700693 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700731 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700798 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvrdb\" (UniqueName: \"kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" 
Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700858 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkbzm\" (UniqueName: \"kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700885 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700914 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.700940 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.707735 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.709644 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.711862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.720295 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.721760 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.730882 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.731198 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.732234 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.769362 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvrdb\" (UniqueName: \"kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb\") pod \"dnsmasq-dns-57bf6b8855-6cwbr\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.776960 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgsss" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.789955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkbzm\" (UniqueName: \"kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm\") pod \"placement-db-sync-4xvcj\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.790508 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.863582 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.898842 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.930577 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4xvcj" Mar 16 15:31:51 crc kubenswrapper[4736]: I0316 15:31:51.975337 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkvcz"] Mar 16 15:31:52 crc kubenswrapper[4736]: W0316 15:31:52.042937 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95467401_2547_478c_bc49_2f40c26aedfb.slice/crio-df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e WatchSource:0}: Error finding container df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e: Status 404 returned error can't find the container with id df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e Mar 16 15:31:52 crc kubenswrapper[4736]: I0316 15:31:52.120910 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:52 crc kubenswrapper[4736]: I0316 15:31:52.357536 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r5gq2"] Mar 16 15:31:52 crc kubenswrapper[4736]: I0316 15:31:52.380998 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:31:52 crc kubenswrapper[4736]: I0316 15:31:52.703882 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-wvncr"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.047603 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-558574fc65-pczl6" podUID="c761675f-cdac-425e-8762-256604a3cb70" containerName="init" containerID="cri-o://0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9" gracePeriod=10 Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.047971 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558574fc65-pczl6" event={"ID":"c761675f-cdac-425e-8762-256604a3cb70","Type":"ContainerStarted","Data":"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.048552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558574fc65-pczl6" event={"ID":"c761675f-cdac-425e-8762-256604a3cb70","Type":"ContainerStarted","Data":"b9d0e0f03392f9def3b17cf08a2590b5c1e830a8692677df5a34fab70fbc5b35"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.057627 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-966f65b8f-98zhj" event={"ID":"8c3b7767-c1c8-4182-93f6-527e75fed36e","Type":"ContainerStarted","Data":"8acfec27db3ebbbddf923a0601f1eeea348f43a139b7b7166cab14c27f685abe"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.066145 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.085845 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cgsss"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.103011 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.108325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wvncr" event={"ID":"2255bb68-be73-4c4f-8739-83783ae195f0","Type":"ContainerStarted","Data":"fe31aef31069eb5830fe53762b8adfe35617534df7f912b8b3688c601198485a"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.145217 4736 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-l6p6k"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.163007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkvcz" event={"ID":"95467401-2547-478c-bc49-2f40c26aedfb","Type":"ContainerStarted","Data":"df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.183527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r5gq2" event={"ID":"9ea1215a-a5f6-406c-aba9-ba1f9da1a943","Type":"ContainerStarted","Data":"6f29ad708320ff0659466585e6c9575309be50ee860c60d13c64d82d2c1e55cd"} Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.197721 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mkvcz" podStartSLOduration=3.197696571 podStartE2EDuration="3.197696571s" podCreationTimestamp="2026-03-16 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:53.193996292 +0000 UTC m=+1114.921386589" watchObservedRunningTime="2026-03-16 15:31:53.197696571 +0000 UTC m=+1114.925086858" Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.488795 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4xvcj"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.513414 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:31:53 crc kubenswrapper[4736]: I0316 15:31:53.861970 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.018296 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb\") pod \"c761675f-cdac-425e-8762-256604a3cb70\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.020166 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwxdv\" (UniqueName: \"kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv\") pod \"c761675f-cdac-425e-8762-256604a3cb70\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.020217 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc\") pod \"c761675f-cdac-425e-8762-256604a3cb70\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.020242 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config\") pod \"c761675f-cdac-425e-8762-256604a3cb70\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.020308 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb\") pod \"c761675f-cdac-425e-8762-256604a3cb70\" (UID: \"c761675f-cdac-425e-8762-256604a3cb70\") " Mar 16 15:31:54 crc 
kubenswrapper[4736]: I0316 15:31:54.042396 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv" (OuterVolumeSpecName: "kube-api-access-vwxdv") pod "c761675f-cdac-425e-8762-256604a3cb70" (UID: "c761675f-cdac-425e-8762-256604a3cb70"). InnerVolumeSpecName "kube-api-access-vwxdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.084249 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c761675f-cdac-425e-8762-256604a3cb70" (UID: "c761675f-cdac-425e-8762-256604a3cb70"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.088337 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c761675f-cdac-425e-8762-256604a3cb70" (UID: "c761675f-cdac-425e-8762-256604a3cb70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.103793 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c761675f-cdac-425e-8762-256604a3cb70" (UID: "c761675f-cdac-425e-8762-256604a3cb70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.112374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config" (OuterVolumeSpecName: "config") pod "c761675f-cdac-425e-8762-256604a3cb70" (UID: "c761675f-cdac-425e-8762-256604a3cb70"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.126553 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwxdv\" (UniqueName: \"kubernetes.io/projected/c761675f-cdac-425e-8762-256604a3cb70-kube-api-access-vwxdv\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.126592 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.126602 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.126612 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.126621 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c761675f-cdac-425e-8762-256604a3cb70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.244840 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4xvcj" event={"ID":"494a2ca6-a44a-4f96-8494-9708b72db762","Type":"ContainerStarted","Data":"cb1201873f3e122413f298657d940062b5cdaeba2295b3924e20fd7e4a4b5145"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.247991 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6d95f5bf-v4prl" event={"ID":"f40f136b-2371-4426-8a0d-766b829bb049","Type":"ContainerStarted","Data":"ac50d5dd73227c51a8983c7907b81a53e85c51452b86a15996d6d9ca6b5e3bbb"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.251965 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgsss" event={"ID":"8c0be59e-89c6-45e8-9697-e513c48bb23e","Type":"ContainerStarted","Data":"a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.252017 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgsss" event={"ID":"8c0be59e-89c6-45e8-9697-e513c48bb23e","Type":"ContainerStarted","Data":"af6c1dcb6380539098bcfcc28de4ea210a04b0144ef051fc9de33e070945445d"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.256284 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" event={"ID":"c44337ed-e499-4fae-b3f6-7cc57f3bed86","Type":"ContainerStarted","Data":"919658deba63a5932dab2bb9aa6b61c48f3d799fa29c2a4ada6fe520e223b749"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.260482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6p6k" event={"ID":"512818ab-2555-491e-8cba-c3192bb85fc2","Type":"ContainerStarted","Data":"f1f96695af148b6eab6e09eeac9ef1b6fc11cbf2c35416221b3a55db364bd2a9"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.272062 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkvcz" event={"ID":"95467401-2547-478c-bc49-2f40c26aedfb","Type":"ContainerStarted","Data":"cb6bf0fe6c0a31f30a0b19798762adbcad06d0261ffb4839609e4359fd8b2c7a"} Mar 
16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.276307 4736 generic.go:334] "Generic (PLEG): container finished" podID="c761675f-cdac-425e-8762-256604a3cb70" containerID="0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9" exitCode=0 Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.276404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558574fc65-pczl6" event={"ID":"c761675f-cdac-425e-8762-256604a3cb70","Type":"ContainerDied","Data":"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.276552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-558574fc65-pczl6" event={"ID":"c761675f-cdac-425e-8762-256604a3cb70","Type":"ContainerDied","Data":"b9d0e0f03392f9def3b17cf08a2590b5c1e830a8692677df5a34fab70fbc5b35"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.276578 4736 scope.go:117] "RemoveContainer" containerID="0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.276800 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-558574fc65-pczl6" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.315255 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerStarted","Data":"9875a2b3a1b5a19021d12954e04c81fe098e2226c941830636703e9974769cd0"} Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.332221 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cgsss" podStartSLOduration=4.332190338 podStartE2EDuration="4.332190338s" podCreationTimestamp="2026-03-16 15:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:54.300885969 +0000 UTC m=+1116.028276256" watchObservedRunningTime="2026-03-16 15:31:54.332190338 +0000 UTC m=+1116.059580625" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.461795 4736 scope.go:117] "RemoveContainer" containerID="0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.461927 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:54 crc kubenswrapper[4736]: E0316 15:31:54.466843 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9\": container with ID starting with 0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9 not found: ID does not exist" containerID="0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.466879 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9"} err="failed to get container status \"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9\": rpc error: code = NotFound desc = could not find container \"0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9\": container with ID starting with 0f6a148e6090eef884cdc1375a41dcd8eae282f4e4a4367880c52e730fd00ed9 not found: ID does not exist" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 
15:31:54.487847 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-558574fc65-pczl6"] Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.690260 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.743933 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:31:54 crc kubenswrapper[4736]: E0316 15:31:54.753062 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c761675f-cdac-425e-8762-256604a3cb70" containerName="init" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.753127 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c761675f-cdac-425e-8762-256604a3cb70" containerName="init" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.753625 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c761675f-cdac-425e-8762-256604a3cb70" containerName="init" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.755527 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.768429 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.768530 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvjd\" (UniqueName: \"kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.774346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.774395 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.774643 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.870417 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.886602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.886672 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.886711 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvjd\" (UniqueName: \"kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.886795 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.886819 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.887301 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.888315 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.894379 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.939389 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:31:54 crc kubenswrapper[4736]: I0316 15:31:54.949742 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:54.998889 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvjd\" (UniqueName: 
\"kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd\") pod \"horizon-5d466855cc-htbkt\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:55.034851 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c761675f-cdac-425e-8762-256604a3cb70" path="/var/lib/kubelet/pods/c761675f-cdac-425e-8762-256604a3cb70/volumes" Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:55.114603 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:55.380876 4736 generic.go:334] "Generic (PLEG): container finished" podID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerID="112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce" exitCode=0 Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:55.382521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" event={"ID":"c44337ed-e499-4fae-b3f6-7cc57f3bed86","Type":"ContainerDied","Data":"112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce"} Mar 16 15:31:55 crc kubenswrapper[4736]: I0316 15:31:55.977618 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:31:56 crc kubenswrapper[4736]: I0316 15:31:56.434591 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" event={"ID":"c44337ed-e499-4fae-b3f6-7cc57f3bed86","Type":"ContainerStarted","Data":"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786"} Mar 16 15:31:56 crc kubenswrapper[4736]: I0316 15:31:56.436214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:31:56 crc kubenswrapper[4736]: I0316 15:31:56.448458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d466855cc-htbkt" event={"ID":"478ebb42-1285-43e0-b14f-3b8d4e63ad23","Type":"ContainerStarted","Data":"45bfeb678a132bddb65611a6039dc83389ea0179364b4b023aaa46851542cb1f"} Mar 16 15:31:56 crc kubenswrapper[4736]: I0316 15:31:56.465028 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" podStartSLOduration=5.465008793 podStartE2EDuration="5.465008793s" podCreationTimestamp="2026-03-16 15:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:31:56.459870896 +0000 UTC m=+1118.187261183" watchObservedRunningTime="2026-03-16 15:31:56.465008793 +0000 UTC m=+1118.192399080" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.379343 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561252-bhqrv"] Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.388610 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.410826 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.413699 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561252-bhqrv"] Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.414649 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnfvf\" (UniqueName: \"kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf\") pod \"auto-csr-approver-29561252-bhqrv\" (UID: \"9b7b3c49-39de-43da-af30-7a34a07d7022\") " pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.415921 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.417023 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.517536 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnfvf\" (UniqueName: \"kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf\") pod \"auto-csr-approver-29561252-bhqrv\" (UID: \"9b7b3c49-39de-43da-af30-7a34a07d7022\") " pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.567308 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnfvf\" (UniqueName: \"kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf\") pod \"auto-csr-approver-29561252-bhqrv\" (UID: \"9b7b3c49-39de-43da-af30-7a34a07d7022\") " pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:00 crc kubenswrapper[4736]: I0316 15:32:00.728935 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.197874 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.232973 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.242428 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248391 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248430 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248476 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248499 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248544 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkwjm\" (UniqueName: \"kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248559 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.248629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.260805 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.269459 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.314565 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.352924 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353414 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkwjm\" (UniqueName: \"kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353711 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.353944 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.355397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.356041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.359056 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-ff55bcd5b-psrsc"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.372403 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.374664 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.386559 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.389838 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkwjm\" (UniqueName: \"kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.405058 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.407065 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle\") pod \"horizon-67c978df54-kdnqn\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.424700 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ff55bcd5b-psrsc"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457343 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-combined-ca-bundle\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-scripts\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457449 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-config-data\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-tls-certs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: 
\"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-logs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457540 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-secret-key\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.457582 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbk2\" (UniqueName: \"kubernetes.io/projected/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-kube-api-access-zxbk2\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560051 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-tls-certs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-logs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-secret-key\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560330 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbk2\" (UniqueName: \"kubernetes.io/projected/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-kube-api-access-zxbk2\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-combined-ca-bundle\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560575 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-scripts\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 
15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.560616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-config-data\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.562149 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-config-data\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.565174 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-scripts\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.569695 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-logs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.578895 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-secret-key\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.587264 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-horizon-tls-certs\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.588013 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-combined-ca-bundle\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.593489 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbk2\" (UniqueName: \"kubernetes.io/projected/4a2c18b8-790c-4bb8-ac86-c70f0220ab3f-kube-api-access-zxbk2\") pod \"horizon-ff55bcd5b-psrsc\" (UID: \"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f\") " pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.602733 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.651883 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561252-bhqrv"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.812801 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.901013 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.991618 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:32:01 crc kubenswrapper[4736]: I0316 15:32:01.991978 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" containerID="cri-o://771ad06af2d4c9ef36653f83baa4eed9e42d44669bf0c3fdcd58900470917f32" gracePeriod=10 Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.072691 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:32:02 crc kubenswrapper[4736]: E0316 15:32:02.072959 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:32:02 crc kubenswrapper[4736]: E0316 15:32:02.073005 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:32:02 crc kubenswrapper[4736]: E0316 15:32:02.073074 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:33:06.07305148 +0000 UTC m=+1187.800441767 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.154259 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.193399 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused" Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.622773 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" event={"ID":"9b7b3c49-39de-43da-af30-7a34a07d7022","Type":"ContainerStarted","Data":"62a7202d393b9da059eae34ee6548a06414da86f02ad7c675c2d4468dbd2a013"} Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.625132 4736 generic.go:334] "Generic (PLEG): container finished" podID="95467401-2547-478c-bc49-2f40c26aedfb" containerID="cb6bf0fe6c0a31f30a0b19798762adbcad06d0261ffb4839609e4359fd8b2c7a" exitCode=0 Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.625248 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkvcz" event={"ID":"95467401-2547-478c-bc49-2f40c26aedfb","Type":"ContainerDied","Data":"cb6bf0fe6c0a31f30a0b19798762adbcad06d0261ffb4839609e4359fd8b2c7a"} Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.631039 4736 generic.go:334] "Generic (PLEG): container finished" podID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerID="771ad06af2d4c9ef36653f83baa4eed9e42d44669bf0c3fdcd58900470917f32" exitCode=0 Mar 16 15:32:02 crc kubenswrapper[4736]: I0316 15:32:02.631151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" event={"ID":"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9","Type":"ContainerDied","Data":"771ad06af2d4c9ef36653f83baa4eed9e42d44669bf0c3fdcd58900470917f32"} Mar 16 15:32:06 crc kubenswrapper[4736]: W0316 15:32:06.069832 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6de0392_402f_47e4_aa1d_19c956a68e1d.slice/crio-127d2153838c8b6d99bb7724793cea56b2f67e573a545b6507d9527b41ea6fb2 WatchSource:0}: Error finding container 127d2153838c8b6d99bb7724793cea56b2f67e573a545b6507d9527b41ea6fb2: Status 404 returned error can't find the container with id 127d2153838c8b6d99bb7724793cea56b2f67e573a545b6507d9527b41ea6fb2 Mar 16 15:32:06 crc kubenswrapper[4736]: I0316 15:32:06.073000 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:32:06 crc kubenswrapper[4736]: I0316 15:32:06.717575 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerStarted","Data":"127d2153838c8b6d99bb7724793cea56b2f67e573a545b6507d9527b41ea6fb2"} Mar 16 15:32:08 crc kubenswrapper[4736]: I0316 15:32:08.748283 4736 generic.go:334] "Generic (PLEG): container finished" podID="1002af8b-786a-47c1-8872-f417cc88561e" containerID="d95bae2195104a52bfcd78054330122fc0dd60b8e564404c345d0771b5e4be63" exitCode=0 Mar 16 15:32:08 crc kubenswrapper[4736]: I0316 15:32:08.748547 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7g9td" event={"ID":"1002af8b-786a-47c1-8872-f417cc88561e","Type":"ContainerDied","Data":"d95bae2195104a52bfcd78054330122fc0dd60b8e564404c345d0771b5e4be63"} Mar 16 15:32:08 crc kubenswrapper[4736]: I0316 15:32:08.964890 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048222 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rm7c\" (UniqueName: \"kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048277 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048408 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048434 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048489 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.048550 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts\") pod \"95467401-2547-478c-bc49-2f40c26aedfb\" (UID: \"95467401-2547-478c-bc49-2f40c26aedfb\") " Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.062383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.072351 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts" (OuterVolumeSpecName: "scripts") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.075303 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.079708 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c" (OuterVolumeSpecName: "kube-api-access-2rm7c") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "kube-api-access-2rm7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.097257 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.111595 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data" (OuterVolumeSpecName: "config-data") pod "95467401-2547-478c-bc49-2f40c26aedfb" (UID: "95467401-2547-478c-bc49-2f40c26aedfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.153981 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rm7c\" (UniqueName: \"kubernetes.io/projected/95467401-2547-478c-bc49-2f40c26aedfb-kube-api-access-2rm7c\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.154030 4736 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.154043 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.154055 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.154068 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.154078 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95467401-2547-478c-bc49-2f40c26aedfb-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.764148 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-mkvcz" event={"ID":"95467401-2547-478c-bc49-2f40c26aedfb","Type":"ContainerDied","Data":"df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e"} Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.764932 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df584dc7f6034f7df6ab46efd4c0ba7603f339c6c8534b3357f7dcbeaeaea21e" Mar 16 15:32:09 crc kubenswrapper[4736]: I0316 15:32:09.764176 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkvcz" Mar 16 15:32:10 crc kubenswrapper[4736]: E0316 15:32:10.086450 4736 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.087195 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mkvcz"] Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.113462 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mkvcz"] Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.158186 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tqk4w"] Mar 16 15:32:10 crc kubenswrapper[4736]: E0316 15:32:10.158604 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95467401-2547-478c-bc49-2f40c26aedfb" containerName="keystone-bootstrap" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.158624 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="95467401-2547-478c-bc49-2f40c26aedfb" containerName="keystone-bootstrap" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.158805 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="95467401-2547-478c-bc49-2f40c26aedfb" containerName="keystone-bootstrap" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.160470 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.166546 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.166601 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.166738 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.168373 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.171516 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4bf5w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.179525 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tqk4w"] Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdhq\" (UniqueName: \"kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282148 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282188 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282242 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282266 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.282330 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.384779 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.384892 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.384930 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.384964 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.385024 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.385189 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdhq\" (UniqueName: \"kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.392946 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.393683 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.393754 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.400050 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") 
" pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.401577 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.405166 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdhq\" (UniqueName: \"kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq\") pod \"keystone-bootstrap-tqk4w\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.491975 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:10 crc kubenswrapper[4736]: I0316 15:32:10.995572 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95467401-2547-478c-bc49-2f40c26aedfb" path="/var/lib/kubelet/pods/95467401-2547-478c-bc49-2f40c26aedfb/volumes" Mar 16 15:32:12 crc kubenswrapper[4736]: I0316 15:32:12.193241 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Mar 16 15:32:17 crc kubenswrapper[4736]: I0316 15:32:17.195247 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Mar 16 15:32:17 crc kubenswrapper[4736]: I0316 15:32:17.196453 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.360333 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.360813 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.361099 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n687h696h59bhdh697hcfhchb6h5b7h548h688h574h676h647h54bh594h579h67fh56fhdh5ddh66dh657h575h5fh5fbh7ch8h544h66ch587h668q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrdlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f6d95f5bf-v4prl_openstack(f40f136b-2371-4426-8a0d-766b829bb049): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.380637 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-f6d95f5bf-v4prl" podUID="f40f136b-2371-4426-8a0d-766b829bb049" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.406521 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.406609 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.406843 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n645h57dhc7h8fh55ch589h5f6h7h64fh5bbh5d6hcdh57ch57chbbhc7h55chdch557hb7h5f8hdh665h687h696hffh5fch9ch55ch586h648h5d6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrvjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d466855cc-htbkt_openstack(478ebb42-1285-43e0-b14f-3b8d4e63ad23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:18 crc kubenswrapper[4736]: E0316 15:32:18.410877 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-5d466855cc-htbkt" podUID="478ebb42-1285-43e0-b14f-3b8d4e63ad23" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.481519 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.630261 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc\") pod \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.630441 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkt6\" (UniqueName: \"kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6\") pod \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.630501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb\") pod \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.630589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb\") pod \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.630632 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config\") pod \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\" (UID: \"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9\") " Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.640355 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6" (OuterVolumeSpecName: "kube-api-access-4wkt6") pod "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" (UID: "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9"). InnerVolumeSpecName "kube-api-access-4wkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.681626 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config" (OuterVolumeSpecName: "config") pod "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" (UID: "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.685182 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" (UID: "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.690507 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" (UID: "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.708527 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" (UID: "25a62c6f-d2ad-4ade-a70a-0c4bbde597c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.732825 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.732855 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkt6\" (UniqueName: \"kubernetes.io/projected/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-kube-api-access-4wkt6\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.732866 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.732875 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.732886 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.869355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" event={"ID":"25a62c6f-d2ad-4ade-a70a-0c4bbde597c9","Type":"ContainerDied","Data":"d30810ed002a08d8c57640b2c719af899038ad94462db92a6b9913418b2a10ba"} Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.869478 4736 scope.go:117] "RemoveContainer" containerID="771ad06af2d4c9ef36653f83baa4eed9e42d44669bf0c3fdcd58900470917f32" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.869408 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.921616 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.931635 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c8c8d4885-6znjz"] Mar 16 15:32:18 crc kubenswrapper[4736]: I0316 15:32:18.997897 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" path="/var/lib/kubelet/pods/25a62c6f-d2ad-4ade-a70a-0c4bbde597c9/volumes" Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.402622 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.403140 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.403340 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfws4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-l6p6k_openstack(512818ab-2555-491e-8cba-c3192bb85fc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.404494 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-l6p6k" 
podUID="512818ab-2555-491e-8cba-c3192bb85fc2" Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.572144 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0be59e_89c6_45e8_9697_e513c48bb23e.slice/crio-a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0be59e_89c6_45e8_9697_e513c48bb23e.slice/crio-conmon-a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11.scope\": RecentStats: unable to find data in memory cache]" Mar 16 15:32:19 crc kubenswrapper[4736]: I0316 15:32:19.883813 4736 generic.go:334] "Generic (PLEG): container finished" podID="8c0be59e-89c6-45e8-9697-e513c48bb23e" containerID="a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11" exitCode=0 Mar 16 15:32:19 crc kubenswrapper[4736]: I0316 15:32:19.883923 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgsss" event={"ID":"8c0be59e-89c6-45e8-9697-e513c48bb23e","Type":"ContainerDied","Data":"a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11"} Mar 16 15:32:19 crc kubenswrapper[4736]: E0316 15:32:19.887519 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-barbican-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/barbican-db-sync-l6p6k" podUID="512818ab-2555-491e-8cba-c3192bb85fc2" Mar 16 15:32:22 crc kubenswrapper[4736]: I0316 15:32:22.196505 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c8c8d4885-6znjz" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Mar 16 15:32:23 crc kubenswrapper[4736]: E0316 15:32:23.891655 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:23 crc kubenswrapper[4736]: E0316 15:32:23.891822 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:23 crc kubenswrapper[4736]: E0316 15:32:23.891954 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n658h9bh66hb4h596hbfh546hbh5bfhddh8h9bh586h95h8h7hc9h5c9h5b9h59dh5d8hd6hf4h58h64h655h67dh674h658h5cbh55fh5cdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-shld5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-966f65b8f-98zhj_openstack(8c3b7767-c1c8-4182-93f6-527e75fed36e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:23 crc kubenswrapper[4736]: E0316 15:32:23.896939 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-966f65b8f-98zhj" podUID="8c3b7767-c1c8-4182-93f6-527e75fed36e" Mar 16 15:32:23 crc kubenswrapper[4736]: I0316 15:32:23.974042 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7g9td" Mar 16 15:32:23 crc kubenswrapper[4736]: I0316 15:32:23.974735 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7g9td" event={"ID":"1002af8b-786a-47c1-8872-f417cc88561e","Type":"ContainerDied","Data":"da72a72aff60506bbe2b5d55347a5a3bfd72d757c06cd2d3aafdb9054db43bc3"} Mar 16 15:32:23 crc kubenswrapper[4736]: I0316 15:32:23.974771 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da72a72aff60506bbe2b5d55347a5a3bfd72d757c06cd2d3aafdb9054db43bc3" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.103747 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data\") pod \"1002af8b-786a-47c1-8872-f417cc88561e\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.103826 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff7rp\" (UniqueName: \"kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp\") pod \"1002af8b-786a-47c1-8872-f417cc88561e\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.103972 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle\") pod \"1002af8b-786a-47c1-8872-f417cc88561e\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.105530 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data\") pod \"1002af8b-786a-47c1-8872-f417cc88561e\" (UID: \"1002af8b-786a-47c1-8872-f417cc88561e\") " Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.177637 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp" (OuterVolumeSpecName: "kube-api-access-ff7rp") pod "1002af8b-786a-47c1-8872-f417cc88561e" (UID: "1002af8b-786a-47c1-8872-f417cc88561e"). InnerVolumeSpecName "kube-api-access-ff7rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.199395 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1002af8b-786a-47c1-8872-f417cc88561e" (UID: "1002af8b-786a-47c1-8872-f417cc88561e"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.211955 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.212048 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff7rp\" (UniqueName: \"kubernetes.io/projected/1002af8b-786a-47c1-8872-f417cc88561e-kube-api-access-ff7rp\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.255940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data" (OuterVolumeSpecName: "config-data") pod "1002af8b-786a-47c1-8872-f417cc88561e" (UID: "1002af8b-786a-47c1-8872-f417cc88561e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.257362 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1002af8b-786a-47c1-8872-f417cc88561e" (UID: "1002af8b-786a-47c1-8872-f417cc88561e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.314254 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.314294 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1002af8b-786a-47c1-8872-f417cc88561e-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:24 crc kubenswrapper[4736]: I0316 15:32:24.986056 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7g9td" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.480505 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:25 crc kubenswrapper[4736]: E0316 15:32:25.482577 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="init" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.482602 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="init" Mar 16 15:32:25 crc kubenswrapper[4736]: E0316 15:32:25.482616 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.482623 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" Mar 16 15:32:25 crc kubenswrapper[4736]: E0316 15:32:25.482640 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1002af8b-786a-47c1-8872-f417cc88561e" containerName="glance-db-sync" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.482648 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1002af8b-786a-47c1-8872-f417cc88561e" containerName="glance-db-sync" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.482815 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a62c6f-d2ad-4ade-a70a-0c4bbde597c9" containerName="dnsmasq-dns" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.482833 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1002af8b-786a-47c1-8872-f417cc88561e" containerName="glance-db-sync" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.483892 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.529599 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.648282 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2d7b\" (UniqueName: \"kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.648472 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.648596 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.649058 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.649145 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.751535 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.751588 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.751682 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.751722 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.751765 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2d7b\" (UniqueName: \"kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.753340 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.753353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.753915 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.756540 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.800454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2d7b\" (UniqueName: \"kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b\") pod \"dnsmasq-dns-644979994c-cgn9v\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:25 crc kubenswrapper[4736]: I0316 15:32:25.807053 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.578094 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.579746 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.582507 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.583245 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l5s4t" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.583394 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.590335 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.772480 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsv5t\" (UniqueName: \"kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773005 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773349 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773571 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.773592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " 
pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.825957 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.827851 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.834474 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.854044 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876701 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876791 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876832 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876866 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876919 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsv5t\" (UniqueName: \"kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.876957 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.877587 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.877658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.877989 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.884775 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.889683 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.889984 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.903325 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsv5t\" (UniqueName: \"kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.917851 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.978924 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.978990 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.979043 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.979093 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.979136 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.979165 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:26 crc kubenswrapper[4736]: I0316 15:32:26.979185 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8jk\" (UniqueName: \"kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.080918 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.080999 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.081082 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.081145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.081220 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.081254 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.081288 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d8jk\" (UniqueName: \"kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.082174 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.082234 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.082243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.085409 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.091353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.093138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " 
pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.099332 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d8jk\" (UniqueName: \"kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.150840 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.203485 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:27 crc kubenswrapper[4736]: I0316 15:32:27.450142 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:28 crc kubenswrapper[4736]: I0316 15:32:28.842362 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:28 crc kubenswrapper[4736]: I0316 15:32:28.942824 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:36 crc kubenswrapper[4736]: E0316 15:32:36.452957 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:36 crc kubenswrapper[4736]: E0316 15:32:36.453773 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:36 crc kubenswrapper[4736]: E0316 15:32:36.453941 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6bh5c6h55fh8h7dh648hdh65ch85h5cfhddh58h94h569h5d9h554hd9hdhc5h55fh659hf7h655h686h59chd9h5d8h659hc6h5dch57ch5dfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkwjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67c978df54-kdnqn_openstack(b6de0392-402f-47e4-aa1d-19c956a68e1d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:36 crc kubenswrapper[4736]: E0316 15:32:36.468418 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.659706 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.668337 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.676356 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cgsss" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817491 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data\") pod \"f40f136b-2371-4426-8a0d-766b829bb049\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817592 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdlg\" (UniqueName: \"kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg\") pod \"f40f136b-2371-4426-8a0d-766b829bb049\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817646 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs\") pod \"f40f136b-2371-4426-8a0d-766b829bb049\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817688 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs\") pod \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817713 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle\") pod \"8c0be59e-89c6-45e8-9697-e513c48bb23e\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817787 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key\") pod \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817813 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrvjd\" (UniqueName: \"kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd\") pod \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817852 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config\") pod \"8c0be59e-89c6-45e8-9697-e513c48bb23e\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817873 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvfbh\" (UniqueName: \"kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh\") pod \"8c0be59e-89c6-45e8-9697-e513c48bb23e\" (UID: \"8c0be59e-89c6-45e8-9697-e513c48bb23e\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817899 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key\") pod \"f40f136b-2371-4426-8a0d-766b829bb049\" (UID: 
\"f40f136b-2371-4426-8a0d-766b829bb049\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817920 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts\") pod \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.817937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts\") pod \"f40f136b-2371-4426-8a0d-766b829bb049\" (UID: \"f40f136b-2371-4426-8a0d-766b829bb049\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.818001 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data\") pod \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\" (UID: \"478ebb42-1285-43e0-b14f-3b8d4e63ad23\") " Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.818947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data" (OuterVolumeSpecName: "config-data") pod "478ebb42-1285-43e0-b14f-3b8d4e63ad23" (UID: "478ebb42-1285-43e0-b14f-3b8d4e63ad23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.819783 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data" (OuterVolumeSpecName: "config-data") pod "f40f136b-2371-4426-8a0d-766b829bb049" (UID: "f40f136b-2371-4426-8a0d-766b829bb049"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.824372 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs" (OuterVolumeSpecName: "logs") pod "f40f136b-2371-4426-8a0d-766b829bb049" (UID: "f40f136b-2371-4426-8a0d-766b829bb049"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.824462 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts" (OuterVolumeSpecName: "scripts") pod "478ebb42-1285-43e0-b14f-3b8d4e63ad23" (UID: "478ebb42-1285-43e0-b14f-3b8d4e63ad23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.824601 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs" (OuterVolumeSpecName: "logs") pod "478ebb42-1285-43e0-b14f-3b8d4e63ad23" (UID: "478ebb42-1285-43e0-b14f-3b8d4e63ad23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.824954 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts" (OuterVolumeSpecName: "scripts") pod "f40f136b-2371-4426-8a0d-766b829bb049" (UID: "f40f136b-2371-4426-8a0d-766b829bb049"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.829009 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd" (OuterVolumeSpecName: "kube-api-access-lrvjd") pod "478ebb42-1285-43e0-b14f-3b8d4e63ad23" (UID: "478ebb42-1285-43e0-b14f-3b8d4e63ad23"). InnerVolumeSpecName "kube-api-access-lrvjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.829580 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "478ebb42-1285-43e0-b14f-3b8d4e63ad23" (UID: "478ebb42-1285-43e0-b14f-3b8d4e63ad23"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.830301 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg" (OuterVolumeSpecName: "kube-api-access-lrdlg") pod "f40f136b-2371-4426-8a0d-766b829bb049" (UID: "f40f136b-2371-4426-8a0d-766b829bb049"). InnerVolumeSpecName "kube-api-access-lrdlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.831640 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f40f136b-2371-4426-8a0d-766b829bb049" (UID: "f40f136b-2371-4426-8a0d-766b829bb049"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.837388 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh" (OuterVolumeSpecName: "kube-api-access-kvfbh") pod "8c0be59e-89c6-45e8-9697-e513c48bb23e" (UID: "8c0be59e-89c6-45e8-9697-e513c48bb23e"). InnerVolumeSpecName "kube-api-access-kvfbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.864414 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c0be59e-89c6-45e8-9697-e513c48bb23e" (UID: "8c0be59e-89c6-45e8-9697-e513c48bb23e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.864542 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config" (OuterVolumeSpecName: "config") pod "8c0be59e-89c6-45e8-9697-e513c48bb23e" (UID: "8c0be59e-89c6-45e8-9697-e513c48bb23e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922401 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922439 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrdlg\" (UniqueName: \"kubernetes.io/projected/f40f136b-2371-4426-8a0d-766b829bb049-kube-api-access-lrdlg\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922451 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f40f136b-2371-4426-8a0d-766b829bb049-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922461 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/478ebb42-1285-43e0-b14f-3b8d4e63ad23-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922470 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922479 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/478ebb42-1285-43e0-b14f-3b8d4e63ad23-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922487 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrvjd\" (UniqueName: \"kubernetes.io/projected/478ebb42-1285-43e0-b14f-3b8d4e63ad23-kube-api-access-lrvjd\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922496 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c0be59e-89c6-45e8-9697-e513c48bb23e-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922505 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvfbh\" (UniqueName: \"kubernetes.io/projected/8c0be59e-89c6-45e8-9697-e513c48bb23e-kube-api-access-kvfbh\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922513 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f40f136b-2371-4426-8a0d-766b829bb049-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922522 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922530 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f40f136b-2371-4426-8a0d-766b829bb049-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:36 crc kubenswrapper[4736]: I0316 15:32:36.922537 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/478ebb42-1285-43e0-b14f-3b8d4e63ad23-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:37 crc kubenswrapper[4736]: E0316 
15:32:37.118687 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:37 crc kubenswrapper[4736]: E0316 15:32:37.118781 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:37 crc kubenswrapper[4736]: E0316 15:32:37.118978 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jdds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-r5gq2_openstack(9ea1215a-a5f6-406c-aba9-ba1f9da1a943): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:37 crc kubenswrapper[4736]: E0316 15:32:37.120173 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-r5gq2" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.142396 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-966f65b8f-98zhj" event={"ID":"8c3b7767-c1c8-4182-93f6-527e75fed36e","Type":"ContainerDied","Data":"8acfec27db3ebbbddf923a0601f1eeea348f43a139b7b7166cab14c27f685abe"} Mar 
16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.142468 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8acfec27db3ebbbddf923a0601f1eeea348f43a139b7b7166cab14c27f685abe" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.144804 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d466855cc-htbkt" event={"ID":"478ebb42-1285-43e0-b14f-3b8d4e63ad23","Type":"ContainerDied","Data":"45bfeb678a132bddb65611a6039dc83389ea0179364b4b023aaa46851542cb1f"} Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.146195 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f6d95f5bf-v4prl" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.146936 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d466855cc-htbkt" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.148294 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f6d95f5bf-v4prl" event={"ID":"f40f136b-2371-4426-8a0d-766b829bb049","Type":"ContainerDied","Data":"ac50d5dd73227c51a8983c7907b81a53e85c51452b86a15996d6d9ca6b5e3bbb"} Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.149855 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgsss" event={"ID":"8c0be59e-89c6-45e8-9697-e513c48bb23e","Type":"ContainerDied","Data":"af6c1dcb6380539098bcfcc28de4ea210a04b0144ef051fc9de33e070945445d"} Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.149911 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6c1dcb6380539098bcfcc28de4ea210a04b0144ef051fc9de33e070945445d" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.149915 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgsss" Mar 16 15:32:37 crc kubenswrapper[4736]: E0316 15:32:37.156949 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-horizon:e43235cb19da04699a53f42b6a75afe9\\\"\"]" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.177716 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.261454 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.276951 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d466855cc-htbkt"] Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.301462 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.309809 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f6d95f5bf-v4prl"] Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336408 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs\") pod \"8c3b7767-c1c8-4182-93f6-527e75fed36e\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336490 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data\") pod \"8c3b7767-c1c8-4182-93f6-527e75fed36e\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336621 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shld5\" (UniqueName: \"kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5\") pod \"8c3b7767-c1c8-4182-93f6-527e75fed36e\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336816 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts\") pod \"8c3b7767-c1c8-4182-93f6-527e75fed36e\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336911 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs" (OuterVolumeSpecName: "logs") pod "8c3b7767-c1c8-4182-93f6-527e75fed36e" (UID: "8c3b7767-c1c8-4182-93f6-527e75fed36e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.336967 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key\") pod \"8c3b7767-c1c8-4182-93f6-527e75fed36e\" (UID: \"8c3b7767-c1c8-4182-93f6-527e75fed36e\") " Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.337546 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts" (OuterVolumeSpecName: "scripts") pod "8c3b7767-c1c8-4182-93f6-527e75fed36e" (UID: "8c3b7767-c1c8-4182-93f6-527e75fed36e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.338082 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.338126 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c3b7767-c1c8-4182-93f6-527e75fed36e-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.338403 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data" (OuterVolumeSpecName: "config-data") pod "8c3b7767-c1c8-4182-93f6-527e75fed36e" (UID: "8c3b7767-c1c8-4182-93f6-527e75fed36e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.340442 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8c3b7767-c1c8-4182-93f6-527e75fed36e" (UID: "8c3b7767-c1c8-4182-93f6-527e75fed36e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.340902 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5" (OuterVolumeSpecName: "kube-api-access-shld5") pod "8c3b7767-c1c8-4182-93f6-527e75fed36e" (UID: "8c3b7767-c1c8-4182-93f6-527e75fed36e"). InnerVolumeSpecName "kube-api-access-shld5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.440096 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8c3b7767-c1c8-4182-93f6-527e75fed36e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.440150 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c3b7767-c1c8-4182-93f6-527e75fed36e-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:37 crc kubenswrapper[4736]: I0316 15:32:37.440165 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shld5\" (UniqueName: \"kubernetes.io/projected/8c3b7767-c1c8-4182-93f6-527e75fed36e-kube-api-access-shld5\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.014649 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.054055 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:32:38 crc kubenswrapper[4736]: E0316 15:32:38.054567 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0be59e-89c6-45e8-9697-e513c48bb23e" containerName="neutron-db-sync" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.054599 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0be59e-89c6-45e8-9697-e513c48bb23e" containerName="neutron-db-sync" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.054791 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c0be59e-89c6-45e8-9697-e513c48bb23e" containerName="neutron-db-sync" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.065706 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.083511 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.167140 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.167193 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.167245 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv268\" (UniqueName: \"kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.167678 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.167719 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.173453 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-966f65b8f-98zhj" Mar 16 15:32:38 crc kubenswrapper[4736]: E0316 15:32:38.195607 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/heat-db-sync-r5gq2" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.266755 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.273696 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.273759 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.273796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv268\" (UniqueName: \"kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.273979 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.274001 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.276837 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.277547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.278054 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config\") pod 
\"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.278591 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.289863 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.292653 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.294742 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.294746 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.294886 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.295257 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g99ph" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.313607 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv268\" (UniqueName: \"kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268\") pod \"dnsmasq-dns-5d7b9c6995-vc6g8\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.375646 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.375697 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.375783 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pbh\" (UniqueName: \"kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.375823 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc 
kubenswrapper[4736]: I0316 15:32:38.375844 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.407508 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.419031 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.434963 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-966f65b8f-98zhj"] Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.477269 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.477322 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.477419 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.477437 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.477500 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pbh\" (UniqueName: \"kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.482217 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.487633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 
15:32:38.494802 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.503983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pbh\" (UniqueName: \"kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.508127 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.508183 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.525249 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs\") pod \"neutron-6779bd586b-h28pq\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.680998 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.998567 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478ebb42-1285-43e0-b14f-3b8d4e63ad23" path="/var/lib/kubelet/pods/478ebb42-1285-43e0-b14f-3b8d4e63ad23/volumes" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.999255 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c3b7767-c1c8-4182-93f6-527e75fed36e" path="/var/lib/kubelet/pods/8c3b7767-c1c8-4182-93f6-527e75fed36e/volumes" Mar 16 15:32:38 crc kubenswrapper[4736]: I0316 15:32:38.999826 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40f136b-2371-4426-8a0d-766b829bb049" path="/var/lib/kubelet/pods/f40f136b-2371-4426-8a0d-766b829bb049/volumes" Mar 16 15:32:39 crc kubenswrapper[4736]: I0316 15:32:39.113575 4736 scope.go:117] "RemoveContainer" containerID="4b4e93accd2785e1e689ee30a98b7d3638386496474367678785c525387ee714" Mar 16 15:32:39 crc kubenswrapper[4736]: E0316 15:32:39.132256 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:39 crc kubenswrapper[4736]: E0316 15:32:39.132334 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:32:39 crc kubenswrapper[4736]: E0316 15:32:39.132562 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2b6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-wvncr_openstack(2255bb68-be73-4c4f-8739-83783ae195f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:32:39 crc kubenswrapper[4736]: E0316 15:32:39.133918 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-wvncr" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" Mar 16 15:32:39 crc kubenswrapper[4736]: E0316 15:32:39.214179 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-cinder-api:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/cinder-db-sync-wvncr" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.212431 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6p6k" event={"ID":"512818ab-2555-491e-8cba-c3192bb85fc2","Type":"ContainerStarted","Data":"c3c8155b686b37f524cec516858f604d152cdf3bd27984a6fffbcb23462f9f7d"} Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.214755 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerStarted","Data":"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590"} Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.216046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4xvcj" event={"ID":"494a2ca6-a44a-4f96-8494-9708b72db762","Type":"ContainerStarted","Data":"9bae3051308e2a907b4302da2df67ec0089a6bc0c225f16f27b88540abcc498c"} Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.244894 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-l6p6k" podStartSLOduration=4.019403202 podStartE2EDuration="50.244866185s" podCreationTimestamp="2026-03-16 15:31:50 +0000 UTC" firstStartedPulling="2026-03-16 15:31:53.136830111 +0000 UTC m=+1114.864220398" lastFinishedPulling="2026-03-16 15:32:39.362293094 +0000 UTC m=+1161.089683381" observedRunningTime="2026-03-16 15:32:40.236567812 +0000 UTC m=+1161.963958099" watchObservedRunningTime="2026-03-16 15:32:40.244866185 +0000 UTC m=+1161.972256462" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.275676 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4xvcj" podStartSLOduration=5.709102971 podStartE2EDuration="49.275649249s" podCreationTimestamp="2026-03-16 15:31:51 +0000 UTC" firstStartedPulling="2026-03-16 15:31:53.533508393 +0000 UTC m=+1115.260898680" lastFinishedPulling="2026-03-16 15:32:37.100054671 +0000 UTC m=+1158.827444958" observedRunningTime="2026-03-16 15:32:40.264830669 +0000 UTC m=+1161.992220956" watchObservedRunningTime="2026-03-16 15:32:40.275649249 +0000 UTC m=+1162.003039536" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.326675 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tqk4w"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.337047 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.380765 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-ff55bcd5b-psrsc"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.432889 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.591242 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.766862 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.771876 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.778678 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.785759 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.812972 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884404 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884444 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884511 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884549 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gwj\" (UniqueName: \"kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884605 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.884634 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.931535 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:32:40 crc 
kubenswrapper[4736]: I0316 15:32:40.990444 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990541 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990575 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990645 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gwj\" (UniqueName: \"kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990724 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:40 crc kubenswrapper[4736]: I0316 15:32:40.990792 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.036898 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.037437 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.039700 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.056901 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.067328 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.067443 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.093603 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gwj\" (UniqueName: \"kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj\") pod \"neutron-7897f5f54f-2vggz\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.129188 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.252273 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerStarted","Data":"538e139e7601848f2da540c30511eb03b1c138167ab5a457ca13928d71f83d5c"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.262822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" event={"ID":"08df962e-9bf1-4a47-bcb5-f3535f67ecd6","Type":"ContainerStarted","Data":"11b2b75279071fde532a2fd3b2e6fe073245a4918eb7b3e341b4819f9347f95d"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.278290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644979994c-cgn9v" event={"ID":"f4079af3-6fd3-4c97-b71e-b137f0ed5415","Type":"ContainerStarted","Data":"31b90636585d441374807bbd19615a5344ab5e6c2ad8613800ad0d19fe569f46"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.289003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tqk4w" event={"ID":"4e65fd8a-c9fa-43b4-b1de-3657226bfac0","Type":"ContainerStarted","Data":"d9fc6c533655946476c1eddcd485509e77ea17774a6cb216fd841494ab409936"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.293789 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerStarted","Data":"654d5c1efb18d324a48d68b433cb7b2af271533eb2d099205953a77df72f818c"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.297812 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" event={"ID":"9b7b3c49-39de-43da-af30-7a34a07d7022","Type":"ContainerStarted","Data":"968e6936851f09c8695f5b59772bbb4af871706eabfed810e8726f74eeeca3ec"} Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.340567 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" podStartSLOduration=3.669542067 podStartE2EDuration="41.340536711s" podCreationTimestamp="2026-03-16 15:32:00 +0000 UTC" firstStartedPulling="2026-03-16 15:32:01.683832319 +0000 UTC m=+1123.411222606" lastFinishedPulling="2026-03-16 15:32:39.354826973 +0000 UTC m=+1161.082217250" observedRunningTime="2026-03-16 15:32:41.327489261 +0000 UTC m=+1163.054879548" watchObservedRunningTime="2026-03-16 15:32:41.340536711 +0000 UTC m=+1163.067926998" Mar 16 15:32:41 crc kubenswrapper[4736]: I0316 15:32:41.390849 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:41 crc kubenswrapper[4736]: W0316 15:32:41.423278 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78e2d9ba_ea6a_4524_9081_af2436b82a87.slice/crio-1171f7b9f2c68e1301c84b5a706a96eab025d43acce5def7299f20efcd9b9106 WatchSource:0}: Error finding container 1171f7b9f2c68e1301c84b5a706a96eab025d43acce5def7299f20efcd9b9106: Status 404 returned error can't find the container with id 1171f7b9f2c68e1301c84b5a706a96eab025d43acce5def7299f20efcd9b9106 Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.265795 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.370151 4736 generic.go:334] "Generic 
(PLEG): container finished" podID="f4079af3-6fd3-4c97-b71e-b137f0ed5415" containerID="6b04b46e66945cec8a593b5e83978ce64d0bb000cbf8f6557731fa7ff6d80489" exitCode=0 Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.370240 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644979994c-cgn9v" event={"ID":"f4079af3-6fd3-4c97-b71e-b137f0ed5415","Type":"ContainerDied","Data":"6b04b46e66945cec8a593b5e83978ce64d0bb000cbf8f6557731fa7ff6d80489"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.377857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tqk4w" event={"ID":"4e65fd8a-c9fa-43b4-b1de-3657226bfac0","Type":"ContainerStarted","Data":"03f0cc145fe562138ddc3e31435ea04222731111c6e8d5b00c5c62788542e54f"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.383496 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerStarted","Data":"26b744f5223405683d4d7e83968214c211b9e4c2f95a629814cc76fae0d728ed"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.385213 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerStarted","Data":"ee2e837ef05a0609463a2f14ce7197a023eb4d66582200f9c0d5eafc4511f5f5"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.389446 4736 generic.go:334] "Generic (PLEG): container finished" podID="9b7b3c49-39de-43da-af30-7a34a07d7022" containerID="968e6936851f09c8695f5b59772bbb4af871706eabfed810e8726f74eeeca3ec" exitCode=0 Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.389516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" event={"ID":"9b7b3c49-39de-43da-af30-7a34a07d7022","Type":"ContainerDied","Data":"968e6936851f09c8695f5b59772bbb4af871706eabfed810e8726f74eeeca3ec"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.390730 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerStarted","Data":"1171f7b9f2c68e1301c84b5a706a96eab025d43acce5def7299f20efcd9b9106"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.401876 4736 generic.go:334] "Generic (PLEG): container finished" podID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerID="a495361ec4df1d9c4c4be03eb474564a59d9e8a9913b2f2ed4d92124b4479579" exitCode=0 Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.401939 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" event={"ID":"08df962e-9bf1-4a47-bcb5-f3535f67ecd6","Type":"ContainerDied","Data":"a495361ec4df1d9c4c4be03eb474564a59d9e8a9913b2f2ed4d92124b4479579"} Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.528126 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tqk4w" podStartSLOduration=32.528069648 podStartE2EDuration="32.528069648s" podCreationTimestamp="2026-03-16 15:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:42.496151913 +0000 UTC m=+1164.223542200" watchObservedRunningTime="2026-03-16 15:32:42.528069648 +0000 UTC m=+1164.255459935" Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.843682 4736 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.871871 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb\") pod \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.872036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb\") pod \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.872079 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2d7b\" (UniqueName: \"kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b\") pod \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.872250 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc\") pod \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.872373 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config\") pod \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\" (UID: \"f4079af3-6fd3-4c97-b71e-b137f0ed5415\") " Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.885665 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b" (OuterVolumeSpecName: "kube-api-access-f2d7b") pod "f4079af3-6fd3-4c97-b71e-b137f0ed5415" (UID: "f4079af3-6fd3-4c97-b71e-b137f0ed5415"). InnerVolumeSpecName "kube-api-access-f2d7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:42 crc kubenswrapper[4736]: I0316 15:32:42.975406 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2d7b\" (UniqueName: \"kubernetes.io/projected/f4079af3-6fd3-4c97-b71e-b137f0ed5415-kube-api-access-f2d7b\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.001776 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.059960 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4079af3-6fd3-4c97-b71e-b137f0ed5415" (UID: "f4079af3-6fd3-4c97-b71e-b137f0ed5415"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.078073 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.144767 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config" (OuterVolumeSpecName: "config") pod "f4079af3-6fd3-4c97-b71e-b137f0ed5415" (UID: "f4079af3-6fd3-4c97-b71e-b137f0ed5415"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.147906 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4079af3-6fd3-4c97-b71e-b137f0ed5415" (UID: "f4079af3-6fd3-4c97-b71e-b137f0ed5415"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.177038 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4079af3-6fd3-4c97-b71e-b137f0ed5415" (UID: "f4079af3-6fd3-4c97-b71e-b137f0ed5415"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.180383 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.180492 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.180503 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4079af3-6fd3-4c97-b71e-b137f0ed5415-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.415479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerStarted","Data":"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.425134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" event={"ID":"08df962e-9bf1-4a47-bcb5-f3535f67ecd6","Type":"ContainerStarted","Data":"033205e197ca17b7f608a594a6840348c0a480f10e31c3cf017a0c019a6b51b4"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.425359 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.428758 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644979994c-cgn9v" event={"ID":"f4079af3-6fd3-4c97-b71e-b137f0ed5415","Type":"ContainerDied","Data":"31b90636585d441374807bbd19615a5344ab5e6c2ad8613800ad0d19fe569f46"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 
15:32:43.428822 4736 scope.go:117] "RemoveContainer" containerID="6b04b46e66945cec8a593b5e83978ce64d0bb000cbf8f6557731fa7ff6d80489" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.428921 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644979994c-cgn9v" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.440349 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerStarted","Data":"ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.450730 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerStarted","Data":"633590f4c22afa7d5bd76b91d83d0a8b909c99107b2b5195f5bd99e08956b8d7"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.455615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerStarted","Data":"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a"} Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.463506 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" podStartSLOduration=5.463486554 podStartE2EDuration="5.463486554s" podCreationTimestamp="2026-03-16 15:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:43.448676057 +0000 UTC m=+1165.176066344" watchObservedRunningTime="2026-03-16 15:32:43.463486554 +0000 UTC m=+1165.190876841" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.485737 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-ff55bcd5b-psrsc" podStartSLOduration=41.955930005 podStartE2EDuration="42.485705569s" podCreationTimestamp="2026-03-16 15:32:01 +0000 UTC" firstStartedPulling="2026-03-16 15:32:40.465554384 +0000 UTC m=+1162.192944671" lastFinishedPulling="2026-03-16 15:32:40.995329948 +0000 UTC m=+1162.722720235" observedRunningTime="2026-03-16 15:32:43.478465224 +0000 UTC m=+1165.205855531" watchObservedRunningTime="2026-03-16 15:32:43.485705569 +0000 UTC m=+1165.213095856" Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.535287 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:43 crc kubenswrapper[4736]: I0316 15:32:43.585456 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-644979994c-cgn9v"] Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.471474 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.472912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" event={"ID":"9b7b3c49-39de-43da-af30-7a34a07d7022","Type":"ContainerDied","Data":"62a7202d393b9da059eae34ee6548a06414da86f02ad7c675c2d4468dbd2a013"} Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.472937 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62a7202d393b9da059eae34ee6548a06414da86f02ad7c675c2d4468dbd2a013" Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.477881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerStarted","Data":"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1"} Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.477988 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-log" containerID="cri-o://205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" gracePeriod=30 Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.478011 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-httpd" containerID="cri-o://a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" gracePeriod=30 Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.483590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerStarted","Data":"1eed9c06ed247475117533fcf394c148fc156f55c12b3db21dafb8ebdd41626d"} Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.517632 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnfvf\" (UniqueName: \"kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf\") pod \"9b7b3c49-39de-43da-af30-7a34a07d7022\" (UID: \"9b7b3c49-39de-43da-af30-7a34a07d7022\") " Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.529172 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf" (OuterVolumeSpecName: "kube-api-access-mnfvf") pod "9b7b3c49-39de-43da-af30-7a34a07d7022" (UID: "9b7b3c49-39de-43da-af30-7a34a07d7022"). InnerVolumeSpecName "kube-api-access-mnfvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.566918 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.566894238 podStartE2EDuration="19.566894238s" podCreationTimestamp="2026-03-16 15:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:44.514365871 +0000 UTC m=+1166.241756158" watchObservedRunningTime="2026-03-16 15:32:44.566894238 +0000 UTC m=+1166.294284525" Mar 16 15:32:44 crc kubenswrapper[4736]: I0316 15:32:44.621477 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnfvf\" (UniqueName: \"kubernetes.io/projected/9b7b3c49-39de-43da-af30-7a34a07d7022-kube-api-access-mnfvf\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.010855 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4079af3-6fd3-4c97-b71e-b137f0ed5415" path="/var/lib/kubelet/pods/f4079af3-6fd3-4c97-b71e-b137f0ed5415/volumes" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.283436 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.347946 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348638 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsv5t\" (UniqueName: \"kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348671 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348802 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348843 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348902 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.348942 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data\") pod \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\" (UID: \"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b\") " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.349403 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs" (OuterVolumeSpecName: "logs") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.349944 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.359796 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t" (OuterVolumeSpecName: "kube-api-access-xsv5t") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "kube-api-access-xsv5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.368468 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts" (OuterVolumeSpecName: "scripts") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.379263 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.436564 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data" (OuterVolumeSpecName: "config-data") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451334 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" (UID: "22d5c9c4-0b7a-42ac-919f-6815e6bcf06b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451380 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451435 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451447 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451456 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451465 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsv5t\" (UniqueName: \"kubernetes.io/projected/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-kube-api-access-xsv5t\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.451477 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.473703 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.539619 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerStarted","Data":"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.539907 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-log" containerID="cri-o://c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" gracePeriod=30 Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.540690 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-httpd" containerID="cri-o://2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" gracePeriod=30 Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.549001 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerStarted","Data":"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.552918 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.552947 4736 reconciler_common.go:293] "Volume detached 
for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.555673 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerStarted","Data":"b517f49007744252310ef1193e8ab73fdb08c0321bca6d24a51d089b1743c2eb"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.556409 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559395 4736 generic.go:334] "Generic (PLEG): container finished" podID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerID="a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" exitCode=143 Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559422 4736 generic.go:334] "Generic (PLEG): container finished" podID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerID="205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" exitCode=143 Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559465 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerDied","Data":"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559490 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerDied","Data":"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559502 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"22d5c9c4-0b7a-42ac-919f-6815e6bcf06b","Type":"ContainerDied","Data":"538e139e7601848f2da540c30511eb03b1c138167ab5a457ca13928d71f83d5c"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559520 4736 scope.go:117] "RemoveContainer" containerID="a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.559633 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.565216 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561252-bhqrv" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.566515 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerStarted","Data":"385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.566554 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.566567 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerStarted","Data":"7ffd6c7641d7f382793b78d637a249602cf34732889e9f01964f23df3673331a"} Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.580819 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561246-2z7qt"] Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.606004 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561246-2z7qt"] Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.612498 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=20.612479113 podStartE2EDuration="20.612479113s" podCreationTimestamp="2026-03-16 15:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:45.584041212 +0000 UTC m=+1167.311431489" watchObservedRunningTime="2026-03-16 15:32:45.612479113 +0000 UTC m=+1167.339869400" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.632999 4736 scope.go:117] "RemoveContainer" containerID="205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.641297 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7897f5f54f-2vggz" podStartSLOduration=5.6412672839999995 podStartE2EDuration="5.641267284s" podCreationTimestamp="2026-03-16 15:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:45.612678128 +0000 UTC m=+1167.340068415" watchObservedRunningTime="2026-03-16 15:32:45.641267284 +0000 UTC m=+1167.368657571" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.653545 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6779bd586b-h28pq" podStartSLOduration=7.653520652 podStartE2EDuration="7.653520652s" podCreationTimestamp="2026-03-16 15:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:45.639396224 +0000 UTC m=+1167.366786511" watchObservedRunningTime="2026-03-16 15:32:45.653520652 +0000 UTC m=+1167.380910939" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.698384 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.741023 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.752882 4736 scope.go:117] "RemoveContainer" 
containerID="a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.753495 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1\": container with ID starting with a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1 not found: ID does not exist" containerID="a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.753547 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1"} err="failed to get container status \"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1\": rpc error: code = NotFound desc = could not find container \"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1\": container with ID starting with a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1 not found: ID does not exist" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.753586 4736 scope.go:117] "RemoveContainer" containerID="205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.753924 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.753946 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a\": container with ID starting with 205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a not found: ID does not exist" containerID="205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.753976 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a"} err="failed to get container status \"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a\": rpc error: code = NotFound desc = could not find container \"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a\": container with ID starting with 205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a not found: ID does not exist" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.753997 4736 scope.go:117] "RemoveContainer" containerID="a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1" Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.754345 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4079af3-6fd3-4c97-b71e-b137f0ed5415" containerName="init" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754360 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4079af3-6fd3-4c97-b71e-b137f0ed5415" containerName="init" Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.754379 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b7b3c49-39de-43da-af30-7a34a07d7022" containerName="oc" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754386 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b7b3c49-39de-43da-af30-7a34a07d7022" containerName="oc" Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.754403 4736 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-log" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754409 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-log" Mar 16 15:32:45 crc kubenswrapper[4736]: E0316 15:32:45.754428 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-httpd" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754434 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-httpd" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754589 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-log" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754605 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b7b3c49-39de-43da-af30-7a34a07d7022" containerName="oc" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754619 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4079af3-6fd3-4c97-b71e-b137f0ed5415" containerName="init" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.754631 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" containerName="glance-httpd" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.756886 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.758404 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1"} err="failed to get container status \"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1\": rpc error: code = NotFound desc = could not find container \"a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1\": container with ID starting with a8ea2fc5c93df307941cd7998b3dd08178eba6ff688b5e3a9c0bd0b9308b98e1 not found: ID does not exist" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.758448 4736 scope.go:117] "RemoveContainer" containerID="205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.758733 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a"} err="failed to get container status \"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a\": rpc error: code = NotFound desc = could not find container \"205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a\": container with ID starting with 205b375ab4e182ed61f5124b0365ebbd5dc65a288b39499daf1ac493d6083f2a not found: ID does not exist" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.762324 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.762611 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.764503 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 
15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.960405 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.961489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.961754 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.961913 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.962280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.962416 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.962552 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxxw\" (UniqueName: \"kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:45 crc kubenswrapper[4736]: I0316 15:32:45.962676 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.064582 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " 
pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.064638 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.064696 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.064735 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065317 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065585 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065615 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065770 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065882 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxxw\" (UniqueName: \"kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.065903 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc 
kubenswrapper[4736]: I0316 15:32:46.070055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.075631 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.077244 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.079981 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.083117 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.101340 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.120546 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxxw\" (UniqueName: \"kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw\") pod \"glance-default-external-api-0\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.410919 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.435882 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.489554 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d8jk\" (UniqueName: \"kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490138 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490182 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490788 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490860 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.490973 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.494279 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs" (OuterVolumeSpecName: "logs") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.497914 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.502777 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.506818 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk" (OuterVolumeSpecName: "kube-api-access-6d8jk") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "kube-api-access-6d8jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.584277 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts" (OuterVolumeSpecName: "scripts") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.595336 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.597506 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") pod \"78e2d9ba-ea6a-4524-9081-af2436b82a87\" (UID: \"78e2d9ba-ea6a-4524-9081-af2436b82a87\") " Mar 16 15:32:46 crc kubenswrapper[4736]: W0316 15:32:46.597983 4736 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/78e2d9ba-ea6a-4524-9081-af2436b82a87/volumes/kubernetes.io~secret/combined-ca-bundle Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.598016 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.609726 4736 generic.go:334] "Generic (PLEG): container finished" podID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerID="2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" exitCode=143 Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.609766 4736 generic.go:334] "Generic (PLEG): container finished" podID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerID="c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" exitCode=143 Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.609966 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.609987 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.609999 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.610008 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d8jk\" (UniqueName: \"kubernetes.io/projected/78e2d9ba-ea6a-4524-9081-af2436b82a87-kube-api-access-6d8jk\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.610044 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.610053 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/78e2d9ba-ea6a-4524-9081-af2436b82a87-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.610211 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.611077 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerDied","Data":"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9"} Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.611837 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerDied","Data":"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e"} Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.611868 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"78e2d9ba-ea6a-4524-9081-af2436b82a87","Type":"ContainerDied","Data":"1171f7b9f2c68e1301c84b5a706a96eab025d43acce5def7299f20efcd9b9106"} Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.611891 4736 scope.go:117] "RemoveContainer" containerID="2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.691300 4736 scope.go:117] "RemoveContainer" containerID="c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.694454 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.705435 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data" (OuterVolumeSpecName: "config-data") pod "78e2d9ba-ea6a-4524-9081-af2436b82a87" (UID: "78e2d9ba-ea6a-4524-9081-af2436b82a87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.712095 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e2d9ba-ea6a-4524-9081-af2436b82a87-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.712142 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.756836 4736 scope.go:117] "RemoveContainer" containerID="2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" Mar 16 15:32:46 crc kubenswrapper[4736]: E0316 15:32:46.763611 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9\": container with ID starting with 2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9 not found: ID does not exist" containerID="2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.763655 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9"} err="failed to get container status \"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9\": rpc error: code = NotFound desc = could not find container \"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9\": container with ID starting with 2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9 not found: ID does not exist" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.763683 4736 scope.go:117] "RemoveContainer" containerID="c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" Mar 16 15:32:46 crc kubenswrapper[4736]: E0316 15:32:46.766468 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e\": container with ID starting with c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e not found: ID does not exist" containerID="c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.766493 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e"} err="failed to get container status \"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e\": rpc error: code = NotFound desc = could not find container \"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e\": container with ID starting with c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e not found: ID does not exist" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.766511 4736 scope.go:117] "RemoveContainer" containerID="2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.766793 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9"} err="failed to get container status \"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9\": rpc 
error: code = NotFound desc = could not find container \"2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9\": container with ID starting with 2d12f171449f57d96c0f7fb6db35fa0ac817c2ece2eca93873fbc92d901773c9 not found: ID does not exist" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.766816 4736 scope.go:117] "RemoveContainer" containerID="c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.767073 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e"} err="failed to get container status \"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e\": rpc error: code = NotFound desc = could not find container \"c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e\": container with ID starting with c6db04d23a0ee32e5528c2c647479ef8b6ce2da3de04430bf4a874187984d84e not found: ID does not exist" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.954208 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.966160 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.996254 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d5c9c4-0b7a-42ac-919f-6815e6bcf06b" path="/var/lib/kubelet/pods/22d5c9c4-0b7a-42ac-919f-6815e6bcf06b/volumes" Mar 16 15:32:46 crc kubenswrapper[4736]: I0316 15:32:46.999212 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d8fcc47-84e9-4db1-8b0c-d64e06af7733" path="/var/lib/kubelet/pods/5d8fcc47-84e9-4db1-8b0c-d64e06af7733/volumes" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.001668 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" path="/var/lib/kubelet/pods/78e2d9ba-ea6a-4524-9081-af2436b82a87/volumes" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.002403 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:47 crc kubenswrapper[4736]: E0316 15:32:47.002787 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-httpd" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.002809 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-httpd" Mar 16 15:32:47 crc kubenswrapper[4736]: E0316 15:32:47.002834 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-log" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.002843 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-log" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.003051 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-log" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.003073 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e2d9ba-ea6a-4524-9081-af2436b82a87" containerName="glance-httpd" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.009140 4736 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.014896 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.015141 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.022385 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119578 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119756 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119780 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkgsl\" (UniqueName: \"kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119833 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.119854 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.176325 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.221869 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.221987 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222014 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222038 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkgsl\" (UniqueName: \"kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222085 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222127 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222182 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc 
kubenswrapper[4736]: I0316 15:32:47.222389 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222790 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.222858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.233288 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.237193 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.240792 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.248528 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkgsl\" (UniqueName: \"kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.256656 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.279980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.330021 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.655135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerStarted","Data":"7f42093c8775ea00d361a897e649a0f86f4ab8909282e6398b0ad1668b0c650e"} Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.663455 4736 generic.go:334] "Generic (PLEG): container finished" podID="494a2ca6-a44a-4f96-8494-9708b72db762" containerID="9bae3051308e2a907b4302da2df67ec0089a6bc0c225f16f27b88540abcc498c" exitCode=0 Mar 16 15:32:47 crc kubenswrapper[4736]: I0316 15:32:47.665273 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4xvcj" event={"ID":"494a2ca6-a44a-4f96-8494-9708b72db762","Type":"ContainerDied","Data":"9bae3051308e2a907b4302da2df67ec0089a6bc0c225f16f27b88540abcc498c"} Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.128132 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:32:48 crc kubenswrapper[4736]: W0316 15:32:48.137424 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c6fa22c_e2c3_4ccb_9b2d_84ee1a1e0c54.slice/crio-ccbdf00dbd9ba3115decc20d915847cfd61e4965b0c2c7ed291de5d92b94a643 WatchSource:0}: Error finding container ccbdf00dbd9ba3115decc20d915847cfd61e4965b0c2c7ed291de5d92b94a643: Status 404 returned error can't find the container with id ccbdf00dbd9ba3115decc20d915847cfd61e4965b0c2c7ed291de5d92b94a643 Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.421041 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.520355 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.520687 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerName="dnsmasq-dns" containerID="cri-o://caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786" gracePeriod=10 Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.730114 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerStarted","Data":"ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201"} Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.731723 4736 generic.go:334] "Generic (PLEG): container finished" podID="512818ab-2555-491e-8cba-c3192bb85fc2" containerID="c3c8155b686b37f524cec516858f604d152cdf3bd27984a6fffbcb23462f9f7d" exitCode=0 Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.731762 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6p6k" event={"ID":"512818ab-2555-491e-8cba-c3192bb85fc2","Type":"ContainerDied","Data":"c3c8155b686b37f524cec516858f604d152cdf3bd27984a6fffbcb23462f9f7d"} Mar 16 15:32:48 crc kubenswrapper[4736]: I0316 15:32:48.746170 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerStarted","Data":"ccbdf00dbd9ba3115decc20d915847cfd61e4965b0c2c7ed291de5d92b94a643"} 
Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.253089 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.399802 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvrdb\" (UniqueName: \"kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb\") pod \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.400293 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb\") pod \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.400385 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc\") pod \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.400557 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config\") pod \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.400592 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb\") pod \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\" (UID: \"c44337ed-e499-4fae-b3f6-7cc57f3bed86\") " Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.412933 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb" (OuterVolumeSpecName: "kube-api-access-kvrdb") pod "c44337ed-e499-4fae-b3f6-7cc57f3bed86" (UID: "c44337ed-e499-4fae-b3f6-7cc57f3bed86"). InnerVolumeSpecName "kube-api-access-kvrdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.503631 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvrdb\" (UniqueName: \"kubernetes.io/projected/c44337ed-e499-4fae-b3f6-7cc57f3bed86-kube-api-access-kvrdb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.706058 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c44337ed-e499-4fae-b3f6-7cc57f3bed86" (UID: "c44337ed-e499-4fae-b3f6-7cc57f3bed86"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.709697 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.716738 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c44337ed-e499-4fae-b3f6-7cc57f3bed86" (UID: "c44337ed-e499-4fae-b3f6-7cc57f3bed86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.728636 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config" (OuterVolumeSpecName: "config") pod "c44337ed-e499-4fae-b3f6-7cc57f3bed86" (UID: "c44337ed-e499-4fae-b3f6-7cc57f3bed86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.730865 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c44337ed-e499-4fae-b3f6-7cc57f3bed86" (UID: "c44337ed-e499-4fae-b3f6-7cc57f3bed86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.811458 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.811482 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.811490 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c44337ed-e499-4fae-b3f6-7cc57f3bed86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.829135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerStarted","Data":"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.835379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4xvcj" event={"ID":"494a2ca6-a44a-4f96-8494-9708b72db762","Type":"ContainerDied","Data":"cb1201873f3e122413f298657d940062b5cdaeba2295b3924e20fd7e4a4b5145"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.835413 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1201873f3e122413f298657d940062b5cdaeba2295b3924e20fd7e4a4b5145" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.847293 4736 generic.go:334] "Generic (PLEG): container finished" podID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerID="caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786" exitCode=0 Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.847399 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" event={"ID":"c44337ed-e499-4fae-b3f6-7cc57f3bed86","Type":"ContainerDied","Data":"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.847441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" event={"ID":"c44337ed-e499-4fae-b3f6-7cc57f3bed86","Type":"ContainerDied","Data":"919658deba63a5932dab2bb9aa6b61c48f3d799fa29c2a4ada6fe520e223b749"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.847464 4736 scope.go:117] "RemoveContainer" containerID="caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.847772 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57bf6b8855-6cwbr" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.876829 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4xvcj" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.878389 4736 generic.go:334] "Generic (PLEG): container finished" podID="4e65fd8a-c9fa-43b4-b1de-3657226bfac0" containerID="03f0cc145fe562138ddc3e31435ea04222731111c6e8d5b00c5c62788542e54f" exitCode=0 Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.878541 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tqk4w" event={"ID":"4e65fd8a-c9fa-43b4-b1de-3657226bfac0","Type":"ContainerDied","Data":"03f0cc145fe562138ddc3e31435ea04222731111c6e8d5b00c5c62788542e54f"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.893560 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerStarted","Data":"d75869f2cacbe41fdec4476257774b71fa31c01d8ddaf5f9021696548485b669"} Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.914333 4736 scope.go:117] "RemoveContainer" containerID="112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce" Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.962278 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.976474 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57bf6b8855-6cwbr"] Mar 16 15:32:49 crc kubenswrapper[4736]: I0316 15:32:49.993867 4736 scope.go:117] "RemoveContainer" containerID="caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786" Mar 16 15:32:50 crc kubenswrapper[4736]: E0316 15:32:50.000681 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786\": container with ID starting with caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786 not found: ID does not exist" containerID="caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.000725 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786"} err="failed to get container status \"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786\": rpc error: code = NotFound desc = could not find container 
\"caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786\": container with ID starting with caf6d5df8b613b6888e552196daff3cf88a4100da3cf5297bd6323b4f78e9786 not found: ID does not exist" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.000753 4736 scope.go:117] "RemoveContainer" containerID="112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce" Mar 16 15:32:50 crc kubenswrapper[4736]: E0316 15:32:50.001245 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce\": container with ID starting with 112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce not found: ID does not exist" containerID="112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.001274 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce"} err="failed to get container status \"112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce\": rpc error: code = NotFound desc = could not find container \"112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce\": container with ID starting with 112dceaa56d261c54604ebf2febb21847cbcefc462c94982937d1a50ba9b09ce not found: ID does not exist" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.010471 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.01044968 podStartE2EDuration="5.01044968s" podCreationTimestamp="2026-03-16 15:32:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:49.982043879 +0000 UTC m=+1171.709434166" watchObservedRunningTime="2026-03-16 15:32:50.01044968 +0000 UTC m=+1171.737839967" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.017261 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts\") pod \"494a2ca6-a44a-4f96-8494-9708b72db762\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.017312 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle\") pod \"494a2ca6-a44a-4f96-8494-9708b72db762\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.017373 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkbzm\" (UniqueName: \"kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm\") pod \"494a2ca6-a44a-4f96-8494-9708b72db762\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.017409 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs\") pod \"494a2ca6-a44a-4f96-8494-9708b72db762\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.017595 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data\") pod \"494a2ca6-a44a-4f96-8494-9708b72db762\" (UID: \"494a2ca6-a44a-4f96-8494-9708b72db762\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.024188 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs" (OuterVolumeSpecName: "logs") pod "494a2ca6-a44a-4f96-8494-9708b72db762" (UID: "494a2ca6-a44a-4f96-8494-9708b72db762"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.040140 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm" (OuterVolumeSpecName: "kube-api-access-hkbzm") pod "494a2ca6-a44a-4f96-8494-9708b72db762" (UID: "494a2ca6-a44a-4f96-8494-9708b72db762"). InnerVolumeSpecName "kube-api-access-hkbzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.043947 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts" (OuterVolumeSpecName: "scripts") pod "494a2ca6-a44a-4f96-8494-9708b72db762" (UID: "494a2ca6-a44a-4f96-8494-9708b72db762"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.057326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "494a2ca6-a44a-4f96-8494-9708b72db762" (UID: "494a2ca6-a44a-4f96-8494-9708b72db762"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.079148 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data" (OuterVolumeSpecName: "config-data") pod "494a2ca6-a44a-4f96-8494-9708b72db762" (UID: "494a2ca6-a44a-4f96-8494-9708b72db762"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.119894 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.119924 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.119934 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkbzm\" (UniqueName: \"kubernetes.io/projected/494a2ca6-a44a-4f96-8494-9708b72db762-kube-api-access-hkbzm\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.119946 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/494a2ca6-a44a-4f96-8494-9708b72db762-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.119954 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494a2ca6-a44a-4f96-8494-9708b72db762-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.615355 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.742545 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfws4\" (UniqueName: \"kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4\") pod \"512818ab-2555-491e-8cba-c3192bb85fc2\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.743118 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle\") pod \"512818ab-2555-491e-8cba-c3192bb85fc2\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.743451 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data\") pod \"512818ab-2555-491e-8cba-c3192bb85fc2\" (UID: \"512818ab-2555-491e-8cba-c3192bb85fc2\") " Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.762462 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4" (OuterVolumeSpecName: "kube-api-access-kfws4") pod "512818ab-2555-491e-8cba-c3192bb85fc2" (UID: "512818ab-2555-491e-8cba-c3192bb85fc2"). InnerVolumeSpecName "kube-api-access-kfws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.769367 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "512818ab-2555-491e-8cba-c3192bb85fc2" (UID: "512818ab-2555-491e-8cba-c3192bb85fc2"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.801170 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512818ab-2555-491e-8cba-c3192bb85fc2" (UID: "512818ab-2555-491e-8cba-c3192bb85fc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.845484 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.845520 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfws4\" (UniqueName: \"kubernetes.io/projected/512818ab-2555-491e-8cba-c3192bb85fc2-kube-api-access-kfws4\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.845533 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512818ab-2555-491e-8cba-c3192bb85fc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.947651 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-l6p6k" event={"ID":"512818ab-2555-491e-8cba-c3192bb85fc2","Type":"ContainerDied","Data":"f1f96695af148b6eab6e09eeac9ef1b6fc11cbf2c35416221b3a55db364bd2a9"} Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.947695 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f96695af148b6eab6e09eeac9ef1b6fc11cbf2c35416221b3a55db364bd2a9" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.947752 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-l6p6k" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.969546 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerStarted","Data":"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497"} Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.973845 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4xvcj" Mar 16 15:32:50 crc kubenswrapper[4736]: I0316 15:32:50.977420 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerStarted","Data":"155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf"} Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.012343 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" path="/var/lib/kubelet/pods/c44337ed-e499-4fae-b3f6-7cc57f3bed86/volumes" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.022394 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67c978df54-kdnqn" podStartSLOduration=-9223371986.832403 podStartE2EDuration="50.022373384s" podCreationTimestamp="2026-03-16 15:32:01 +0000 UTC" firstStartedPulling="2026-03-16 15:32:06.072735062 +0000 UTC m=+1127.800125359" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:51.018621904 +0000 UTC m=+1172.746012191" watchObservedRunningTime="2026-03-16 15:32:51.022373384 +0000 UTC m=+1172.749763671" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.124952 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:32:51 crc kubenswrapper[4736]: E0316 15:32:51.125701 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494a2ca6-a44a-4f96-8494-9708b72db762" containerName="placement-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125716 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="494a2ca6-a44a-4f96-8494-9708b72db762" containerName="placement-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: E0316 15:32:51.125728 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512818ab-2555-491e-8cba-c3192bb85fc2" containerName="barbican-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125734 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="512818ab-2555-491e-8cba-c3192bb85fc2" containerName="barbican-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: E0316 15:32:51.125749 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerName="dnsmasq-dns" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125755 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerName="dnsmasq-dns" Mar 16 15:32:51 crc kubenswrapper[4736]: E0316 15:32:51.125773 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerName="init" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125779 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" containerName="init" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125943 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="512818ab-2555-491e-8cba-c3192bb85fc2" containerName="barbican-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125964 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="494a2ca6-a44a-4f96-8494-9708b72db762" containerName="placement-db-sync" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.125973 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44337ed-e499-4fae-b3f6-7cc57f3bed86" 
containerName="dnsmasq-dns" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.126922 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.152663 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.152745 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.152804 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.152870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.152930 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8577t\" (UniqueName: \"kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.159559 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-97w2k" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.176143 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.176405 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.221922 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.255650 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.257253 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc 
kubenswrapper[4736]: I0316 15:32:51.257349 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.257408 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.257504 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.257587 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8577t\" (UniqueName: \"kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.261626 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.266890 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.272240 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.282454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.292135 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.292202 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.292142 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w7kkc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.292468 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.299019 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.312857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.327738 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.329529 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.332546 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.333840 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8577t\" (UniqueName: \"kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t\") pod \"barbican-worker-5fd955ff9f-lqqft\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364333 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364395 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364479 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364534 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364634 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfp5q\" (UniqueName: \"kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q\") pod 
\"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364666 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364793 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364819 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364839 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364854 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364894 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswps\" (UniqueName: \"kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.364926 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.381293 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.407723 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468425 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468572 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468632 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468671 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468763 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswps\" (UniqueName: \"kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.468985 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.469157 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.469241 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.469422 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfp5q\" (UniqueName: \"kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.469495 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.485635 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.485900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.487891 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.525133 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.558487 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.560579 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.562805 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.562871 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.563374 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.563877 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.564607 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.571816 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfp5q\" (UniqueName: \"kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q\") pod \"placement-657c4c6596-nmfhd\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " 
pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.573536 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswps\" (UniqueName: \"kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps\") pod \"barbican-keystone-listener-6cf4cbd846-gkgvc\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.577291 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.592425 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.606044 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.606123 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.628168 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.715859 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.717563 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.717615 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cctzv\" (UniqueName: \"kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.717643 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.717679 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.736752 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.736752 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.739377 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.755218 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.759442 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.814229 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.814739 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.866167 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.883834 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.883963 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884042 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884451 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prw5s\" (UniqueName: \"kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884606 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884648 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cctzv\" (UniqueName: \"kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.884874 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.885629 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.886823 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.889399 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.889941 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.972338 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cctzv\" (UniqueName: \"kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv\") pod \"dnsmasq-dns-f488c94c5-5j5ct\" (UID: 
\"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.991905 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.992526 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.992668 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prw5s\" (UniqueName: \"kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.992701 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.992735 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.993912 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:51 crc kubenswrapper[4736]: I0316 15:32:51.998885 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.022192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.024661 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prw5s\" (UniqueName: \"kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " 
pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.045579 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data\") pod \"barbican-api-56c87c56bd-8q8cv\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.088787 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.176493 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerStarted","Data":"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35"} Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.252660 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.252637154 podStartE2EDuration="6.252637154s" podCreationTimestamp="2026-03-16 15:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:52.247209509 +0000 UTC m=+1173.974599796" watchObservedRunningTime="2026-03-16 15:32:52.252637154 +0000 UTC m=+1173.980027441" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.253438 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.538215 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.630927 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.631429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdhq\" (UniqueName: \"kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.631615 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.631663 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.631687 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.631760 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys\") pod \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\" (UID: \"4e65fd8a-c9fa-43b4-b1de-3657226bfac0\") " Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.676612 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq" (OuterVolumeSpecName: "kube-api-access-lzdhq") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "kube-api-access-lzdhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.693290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts" (OuterVolumeSpecName: "scripts") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.722446 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.736179 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.742611 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.742821 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdhq\" (UniqueName: \"kubernetes.io/projected/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-kube-api-access-lzdhq\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.736831 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.764356 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.772192 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data" (OuterVolumeSpecName: "config-data") pod "4e65fd8a-c9fa-43b4-b1de-3657226bfac0" (UID: "4e65fd8a-c9fa-43b4-b1de-3657226bfac0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.845625 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.845984 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:52 crc kubenswrapper[4736]: I0316 15:32:52.845996 4736 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e65fd8a-c9fa-43b4-b1de-3657226bfac0-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.012093 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.022880 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:32:53 crc kubenswrapper[4736]: W0316 15:32:53.035083 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883da065_2249_44ef_8c33_e8bdd36f824b.slice/crio-a3f383d73636f9e7c7334b8b9808ef2c7c208c0b44f6e35a094cd5d2eadf4b98 WatchSource:0}: Error finding container a3f383d73636f9e7c7334b8b9808ef2c7c208c0b44f6e35a094cd5d2eadf4b98: Status 404 returned error can't find the container with id a3f383d73636f9e7c7334b8b9808ef2c7c208c0b44f6e35a094cd5d2eadf4b98 Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.220331 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.279018 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wvncr" event={"ID":"2255bb68-be73-4c4f-8739-83783ae195f0","Type":"ContainerStarted","Data":"83f02feb1d29de3ec609d84f317177939a50baf56442738d77ad551accdb782f"} Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.285085 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tqk4w" event={"ID":"4e65fd8a-c9fa-43b4-b1de-3657226bfac0","Type":"ContainerDied","Data":"d9fc6c533655946476c1eddcd485509e77ea17774a6cb216fd841494ab409936"} Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.285140 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9fc6c533655946476c1eddcd485509e77ea17774a6cb216fd841494ab409936" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.285219 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tqk4w" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.310589 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-wvncr" podStartSLOduration=5.616976388 podStartE2EDuration="1m3.310562631s" podCreationTimestamp="2026-03-16 15:31:50 +0000 UTC" firstStartedPulling="2026-03-16 15:31:52.7233548 +0000 UTC m=+1114.450745087" lastFinishedPulling="2026-03-16 15:32:50.416941033 +0000 UTC m=+1172.144331330" observedRunningTime="2026-03-16 15:32:53.306044439 +0000 UTC m=+1175.033434726" watchObservedRunningTime="2026-03-16 15:32:53.310562631 +0000 UTC m=+1175.037952928" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.339400 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerStarted","Data":"86c77a0dd86a4de8035b6705d5dd08dd1659c0e6035167ded67226ed33d2f058"} Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.345735 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerStarted","Data":"a3f383d73636f9e7c7334b8b9808ef2c7c208c0b44f6e35a094cd5d2eadf4b98"} Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.370292 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.397055 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.676532 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-654fb4cdb6-6lld5"] Mar 16 15:32:53 crc kubenswrapper[4736]: E0316 15:32:53.677256 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e65fd8a-c9fa-43b4-b1de-3657226bfac0" containerName="keystone-bootstrap" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.677326 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e65fd8a-c9fa-43b4-b1de-3657226bfac0" containerName="keystone-bootstrap" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.677552 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e65fd8a-c9fa-43b4-b1de-3657226bfac0" containerName="keystone-bootstrap" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.678429 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.686567 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.686857 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.687047 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.687636 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.687919 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.688144 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4bf5w" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.699653 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-654fb4cdb6-6lld5"] Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.776965 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-combined-ca-bundle\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777019 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-scripts\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777049 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-internal-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777076 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-credential-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777174 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-public-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777203 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-config-data\") pod \"keystone-654fb4cdb6-6lld5\" (UID: 
\"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777228 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7jd\" (UniqueName: \"kubernetes.io/projected/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-kube-api-access-xl7jd\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.777253 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-fernet-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881396 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl7jd\" (UniqueName: \"kubernetes.io/projected/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-kube-api-access-xl7jd\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881452 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-fernet-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881530 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-combined-ca-bundle\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-scripts\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881575 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-internal-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-credential-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-public-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " 
pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.881689 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-config-data\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.893304 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-credential-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.911766 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7jd\" (UniqueName: \"kubernetes.io/projected/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-kube-api-access-xl7jd\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.913299 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-combined-ca-bundle\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.913667 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-config-data\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.913786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-internal-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.914433 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-public-tls-certs\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.914564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-scripts\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:53 crc kubenswrapper[4736]: I0316 15:32:53.919281 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea7a8a52-515b-45e3-8c30-4fd52d65cdc6-fernet-keys\") pod \"keystone-654fb4cdb6-6lld5\" (UID: \"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6\") " pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.034323 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.367274 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerStarted","Data":"513effc752df252eccc2223e1a1a4e67a6cb81e053c1e36cbff2c5a995126ba3"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.369676 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r5gq2" event={"ID":"9ea1215a-a5f6-406c-aba9-ba1f9da1a943","Type":"ContainerStarted","Data":"da1ab425d844aa2f278d9d46131758202b12c16bd610a5f05c16ec131c610fdb"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.380563 4736 generic.go:334] "Generic (PLEG): container finished" podID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerID="6ee095a55e8eea421626c98630666bd9f5215ac2b44878bfffec35b72b21da54" exitCode=0 Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.380641 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" event={"ID":"ece0fa37-8578-46d1-879f-4486ae0be3b5","Type":"ContainerDied","Data":"6ee095a55e8eea421626c98630666bd9f5215ac2b44878bfffec35b72b21da54"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.380695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" event={"ID":"ece0fa37-8578-46d1-879f-4486ae0be3b5","Type":"ContainerStarted","Data":"431b50559b09e3b5ea49d3e3f6d4088da5ff47d0ec667d4b05c5ca12cf858576"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.388045 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerStarted","Data":"ecda1de7c141cbe7ee94e87cd039c10e914287a00c96da9567f266214ab678f6"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.388090 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerStarted","Data":"ca314bf15850304723004c1b313bdeda120468ecfaf784ef381f814bf6c8a800"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.388293 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.388319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.441639 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-r5gq2" podStartSLOduration=4.134090114 podStartE2EDuration="1m4.441615465s" podCreationTimestamp="2026-03-16 15:31:50 +0000 UTC" firstStartedPulling="2026-03-16 15:31:52.374793288 +0000 UTC m=+1114.102183575" lastFinishedPulling="2026-03-16 15:32:52.682318639 +0000 UTC m=+1174.409708926" observedRunningTime="2026-03-16 15:32:54.40971882 +0000 UTC m=+1176.137109107" watchObservedRunningTime="2026-03-16 15:32:54.441615465 +0000 UTC m=+1176.169005752" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.452526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerStarted","Data":"81a59404d5ea2e085f459d6aa0498fb07ae90943e16dd7034255b96ccf1633e7"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.452599 
4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerStarted","Data":"2acf81282adcaf5e5ccec50ff2b5a8bb0fbce0cf7739e4f2f754e27f2b0f09de"} Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.479502 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-657c4c6596-nmfhd" podStartSLOduration=3.479472458 podStartE2EDuration="3.479472458s" podCreationTimestamp="2026-03-16 15:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:54.473345804 +0000 UTC m=+1176.200736111" watchObservedRunningTime="2026-03-16 15:32:54.479472458 +0000 UTC m=+1176.206862745" Mar 16 15:32:54 crc kubenswrapper[4736]: I0316 15:32:54.822622 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-654fb4cdb6-6lld5"] Mar 16 15:32:54 crc kubenswrapper[4736]: W0316 15:32:54.860445 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7a8a52_515b_45e3_8c30_4fd52d65cdc6.slice/crio-2cac0831db8418d22515e921c47283cf58be916546e6ea3ac88bc442a4a6c739 WatchSource:0}: Error finding container 2cac0831db8418d22515e921c47283cf58be916546e6ea3ac88bc442a4a6c739: Status 404 returned error can't find the container with id 2cac0831db8418d22515e921c47283cf58be916546e6ea3ac88bc442a4a6c739 Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.485184 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-648fb9b5bc-8f55h"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.487030 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.488987 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-654fb4cdb6-6lld5" event={"ID":"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6","Type":"ContainerStarted","Data":"bf302b4f2cf96de71505334a3156f843c7044957bcca2fffebf7a0f2f30dea4d"} Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.489045 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-654fb4cdb6-6lld5" event={"ID":"ea7a8a52-515b-45e3-8c30-4fd52d65cdc6","Type":"ContainerStarted","Data":"2cac0831db8418d22515e921c47283cf58be916546e6ea3ac88bc442a4a6c739"} Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.490222 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.503789 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" event={"ID":"ece0fa37-8578-46d1-879f-4486ae0be3b5","Type":"ContainerStarted","Data":"6333ff67604da865829bb9fef5a314db61bbf3710b586801fb39251551619648"} Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.505058 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.517579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerStarted","Data":"d63ced8971ea02f8c1b87b4668549309fed45bcf7d5ff09f4e47c6dd7f93c5c6"} Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.517645 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.517661 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.522146 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-648fb9b5bc-8f55h"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.546162 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5868c68fb4-ww9v7"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.547944 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.567049 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-logs\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.567097 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.567244 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvkjl\" (UniqueName: \"kubernetes.io/projected/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-kube-api-access-rvkjl\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.567369 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data-custom\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.567427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-combined-ca-bundle\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.579834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5868c68fb4-ww9v7"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.644831 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-654fb4cdb6-6lld5" podStartSLOduration=2.64480587 podStartE2EDuration="2.64480587s" podCreationTimestamp="2026-03-16 15:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:55.57794562 +0000 UTC m=+1177.305335927" watchObservedRunningTime="2026-03-16 15:32:55.64480587 +0000 UTC m=+1177.372196157" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.672670 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" podStartSLOduration=4.672635495 podStartE2EDuration="4.672635495s" podCreationTimestamp="2026-03-16 15:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:55.632934402 +0000 UTC m=+1177.360324689" watchObservedRunningTime="2026-03-16 15:32:55.672635495 +0000 UTC m=+1177.400025782" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678547 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data-custom\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-combined-ca-bundle\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678678 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data-custom\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678723 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-logs\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678746 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678782 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8vm\" (UniqueName: \"kubernetes.io/projected/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-kube-api-access-sw8vm\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678861 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvkjl\" (UniqueName: \"kubernetes.io/projected/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-kube-api-access-rvkjl\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678899 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-logs\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: 
\"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.678925 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-combined-ca-bundle\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.681119 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-logs\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.713215 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data-custom\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.714502 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-config-data\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.715177 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-combined-ca-bundle\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.725302 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56c87c56bd-8q8cv" podStartSLOduration=4.725278095 podStartE2EDuration="4.725278095s" podCreationTimestamp="2026-03-16 15:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:32:55.697788829 +0000 UTC m=+1177.425179116" watchObservedRunningTime="2026-03-16 15:32:55.725278095 +0000 UTC m=+1177.452668372" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.739454 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59554d7c7d-jq5k7"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.741030 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.749013 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.749247 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.781444 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8vm\" (UniqueName: \"kubernetes.io/projected/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-kube-api-access-sw8vm\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.787999 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.788454 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-logs\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.788563 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-combined-ca-bundle\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.789455 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data-custom\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.791322 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-logs\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.805416 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59554d7c7d-jq5k7"] Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.815774 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvkjl\" (UniqueName: \"kubernetes.io/projected/ee82b20e-e4ad-4267-9845-3c5838fa1e0f-kube-api-access-rvkjl\") pod \"barbican-worker-648fb9b5bc-8f55h\" (UID: \"ee82b20e-e4ad-4267-9845-3c5838fa1e0f\") " pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.828573 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.828806 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-combined-ca-bundle\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.831714 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-648fb9b5bc-8f55h" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.840877 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8vm\" (UniqueName: \"kubernetes.io/projected/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-kube-api-access-sw8vm\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.863291 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad07fdc6-06e5-4045-8049-783bc6e6d5c6-config-data-custom\") pod \"barbican-keystone-listener-5868c68fb4-ww9v7\" (UID: \"ad07fdc6-06e5-4045-8049-783bc6e6d5c6\") " pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895580 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-internal-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895653 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9e0de3-6386-4733-bc2b-b2eec48d8098-logs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895674 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4cd\" (UniqueName: \"kubernetes.io/projected/3c9e0de3-6386-4733-bc2b-b2eec48d8098-kube-api-access-wg4cd\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895749 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-combined-ca-bundle\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895795 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data-custom\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.895889 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-public-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.915945 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.997982 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-internal-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.998488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9e0de3-6386-4733-bc2b-b2eec48d8098-logs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.998591 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4cd\" (UniqueName: \"kubernetes.io/projected/3c9e0de3-6386-4733-bc2b-b2eec48d8098-kube-api-access-wg4cd\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.998716 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-combined-ca-bundle\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.998823 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.998945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data-custom\") pod 
\"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:55 crc kubenswrapper[4736]: I0316 15:32:55.999019 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-public-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.004282 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c9e0de3-6386-4733-bc2b-b2eec48d8098-logs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.008676 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data-custom\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.013563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-combined-ca-bundle\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.014001 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-public-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.026470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-config-data\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.035765 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4cd\" (UniqueName: \"kubernetes.io/projected/3c9e0de3-6386-4733-bc2b-b2eec48d8098-kube-api-access-wg4cd\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.043835 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c9e0de3-6386-4733-bc2b-b2eec48d8098-internal-tls-certs\") pod \"barbican-api-59554d7c7d-jq5k7\" (UID: \"3c9e0de3-6386-4733-bc2b-b2eec48d8098\") " pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.235604 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.411351 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.411415 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.486370 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.536442 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.550400 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 16 15:32:56 crc kubenswrapper[4736]: I0316 15:32:56.550432 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.330837 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.331499 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.400475 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.448823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.557608 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:57 crc kubenswrapper[4736]: I0316 15:32:57.558016 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 16 15:32:58 crc kubenswrapper[4736]: I0316 15:32:58.577449 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:32:58 crc kubenswrapper[4736]: I0316 15:32:58.577473 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:32:59 crc kubenswrapper[4736]: I0316 15:32:59.584959 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:32:59 crc kubenswrapper[4736]: I0316 15:32:59.585323 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:33:01 crc kubenswrapper[4736]: E0316 15:33:01.031001 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="0892ebc9-dbd4-4652-9691-13028da07f80" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.534884 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.535010 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.535636 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.535783 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.592580 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.614833 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.618123 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:33:01 crc kubenswrapper[4736]: I0316 15:33:01.823258 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.146770 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.255319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.373337 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.373715 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" containerID="cri-o://033205e197ca17b7f608a594a6840348c0a480f10e31c3cf017a0c019a6b51b4" gracePeriod=10 Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.673752 4736 generic.go:334] "Generic (PLEG): container finished" podID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerID="033205e197ca17b7f608a594a6840348c0a480f10e31c3cf017a0c019a6b51b4" exitCode=0 Mar 16 15:33:02 crc kubenswrapper[4736]: I0316 15:33:02.674144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" event={"ID":"08df962e-9bf1-4a47-bcb5-f3535f67ecd6","Type":"ContainerDied","Data":"033205e197ca17b7f608a594a6840348c0a480f10e31c3cf017a0c019a6b51b4"} Mar 16 15:33:03 crc kubenswrapper[4736]: I0316 15:33:03.132384 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:03 crc kubenswrapper[4736]: I0316 15:33:03.419775 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Mar 
16 15:33:06 crc kubenswrapper[4736]: I0316 15:33:06.092644 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:33:06 crc kubenswrapper[4736]: E0316 15:33:06.092926 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:33:06 crc kubenswrapper[4736]: E0316 15:33:06.092963 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:33:06 crc kubenswrapper[4736]: E0316 15:33:06.093059 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:35:08.093035353 +0000 UTC m=+1309.820425630 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:33:06 crc kubenswrapper[4736]: I0316 15:33:06.155699 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:06 crc kubenswrapper[4736]: I0316 15:33:06.741004 4736 generic.go:334] "Generic (PLEG): container finished" podID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" containerID="da1ab425d844aa2f278d9d46131758202b12c16bd610a5f05c16ec131c610fdb" exitCode=0 Mar 16 15:33:06 crc kubenswrapper[4736]: I0316 15:33:06.741461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r5gq2" event={"ID":"9ea1215a-a5f6-406c-aba9-ba1f9da1a943","Type":"ContainerDied","Data":"da1ab425d844aa2f278d9d46131758202b12c16bd610a5f05c16ec131c610fdb"} Mar 16 15:33:06 crc kubenswrapper[4736]: I0316 15:33:06.960747 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:33:07 crc kubenswrapper[4736]: I0316 15:33:07.134239 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:07 crc kubenswrapper[4736]: I0316 15:33:07.760067 4736 generic.go:334] "Generic (PLEG): container finished" podID="2255bb68-be73-4c4f-8739-83783ae195f0" containerID="83f02feb1d29de3ec609d84f317177939a50baf56442738d77ad551accdb782f" exitCode=0 Mar 16 15:33:07 crc kubenswrapper[4736]: I0316 15:33:07.760298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wvncr" event={"ID":"2255bb68-be73-4c4f-8739-83783ae195f0","Type":"ContainerDied","Data":"83f02feb1d29de3ec609d84f317177939a50baf56442738d77ad551accdb782f"} Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.419683 4736 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.158:5353: connect: connection refused" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.517991 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.518056 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.529844 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r5gq2" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.551798 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jdds\" (UniqueName: \"kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds\") pod \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.551884 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data\") pod \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.552038 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle\") pod \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\" (UID: \"9ea1215a-a5f6-406c-aba9-ba1f9da1a943\") " Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.565523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds" (OuterVolumeSpecName: "kube-api-access-4jdds") pod "9ea1215a-a5f6-406c-aba9-ba1f9da1a943" (UID: "9ea1215a-a5f6-406c-aba9-ba1f9da1a943"). InnerVolumeSpecName "kube-api-access-4jdds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.664321 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jdds\" (UniqueName: \"kubernetes.io/projected/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-kube-api-access-4jdds\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.762623 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.763263 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ea1215a-a5f6-406c-aba9-ba1f9da1a943" (UID: "9ea1215a-a5f6-406c-aba9-ba1f9da1a943"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.767465 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.862127 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r5gq2" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.862372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r5gq2" event={"ID":"9ea1215a-a5f6-406c-aba9-ba1f9da1a943","Type":"ContainerDied","Data":"6f29ad708320ff0659466585e6c9575309be50ee860c60d13c64d82d2c1e55cd"} Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.862410 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f29ad708320ff0659466585e6c9575309be50ee860c60d13c64d82d2c1e55cd" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.869210 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data" (OuterVolumeSpecName: "config-data") pod "9ea1215a-a5f6-406c-aba9-ba1f9da1a943" (UID: "9ea1215a-a5f6-406c-aba9-ba1f9da1a943"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:08 crc kubenswrapper[4736]: I0316 15:33:08.871770 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ea1215a-a5f6-406c-aba9-ba1f9da1a943-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.110153 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.110977 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7897f5f54f-2vggz" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-api" containerID="cri-o://633590f4c22afa7d5bd76b91d83d0a8b909c99107b2b5195f5bd99e08956b8d7" gracePeriod=30 Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.111185 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7897f5f54f-2vggz" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" containerID="cri-o://b517f49007744252310ef1193e8ab73fdb08c0321bca6d24a51d089b1743c2eb" gracePeriod=30 Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.170636 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7897f5f54f-2vggz" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9696/\": read tcp 10.217.0.2:37472->10.217.0.160:9696: read: connection reset by peer" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.246026 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.282434 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb\") pod \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.300689 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv268\" (UniqueName: \"kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268\") pod \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.300792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config\") pod \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.300875 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc\") pod \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.301000 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb\") pod \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\" (UID: \"08df962e-9bf1-4a47-bcb5-f3535f67ecd6\") " Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.339027 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 15:33:09 crc kubenswrapper[4736]: E0316 15:33:09.341073 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="init" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.341092 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="init" Mar 16 15:33:09 crc kubenswrapper[4736]: E0316 15:33:09.341141 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.341153 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" Mar 16 15:33:09 crc kubenswrapper[4736]: E0316 15:33:09.341205 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" containerName="heat-db-sync" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.341211 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" containerName="heat-db-sync" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.343186 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" containerName="heat-db-sync" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.343220 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" containerName="dnsmasq-dns" Mar 16 15:33:09 crc 
kubenswrapper[4736]: I0316 15:33:09.348819 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.352584 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268" (OuterVolumeSpecName: "kube-api-access-pv268") pod "08df962e-9bf1-4a47-bcb5-f3535f67ecd6" (UID: "08df962e-9bf1-4a47-bcb5-f3535f67ecd6"). InnerVolumeSpecName "kube-api-access-pv268". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.394384 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405735 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405831 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405875 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405909 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405949 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.405981 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.406015 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2qv\" (UniqueName: \"kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " 
pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.406143 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv268\" (UniqueName: \"kubernetes.io/projected/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-kube-api-access-pv268\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.514928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.524540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2qv\" (UniqueName: \"kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.524739 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.524855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.524921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.524966 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.525045 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.533415 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-648fb9b5bc-8f55h"] Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.561867 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc 
kubenswrapper[4736]: I0316 15:33:09.564273 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.565138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.598350 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.613530 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.614222 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.616478 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2qv\" (UniqueName: \"kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv\") pod \"neutron-5db557dd69-7c5jd\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.678060 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5868c68fb4-ww9v7"] Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.685382 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config" (OuterVolumeSpecName: "config") pod "08df962e-9bf1-4a47-bcb5-f3535f67ecd6" (UID: "08df962e-9bf1-4a47-bcb5-f3535f67ecd6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.728521 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.816469 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "08df962e-9bf1-4a47-bcb5-f3535f67ecd6" (UID: "08df962e-9bf1-4a47-bcb5-f3535f67ecd6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.830627 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59554d7c7d-jq5k7"] Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.831666 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.832442 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "08df962e-9bf1-4a47-bcb5-f3535f67ecd6" (UID: "08df962e-9bf1-4a47-bcb5-f3535f67ecd6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:09 crc kubenswrapper[4736]: W0316 15:33:09.844575 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9e0de3_6386_4733_bc2b_b2eec48d8098.slice/crio-2ee57aa9335bcb91f77a986e3f198c7d709cdc605dda1cc8bb3fd3cde5df9037 WatchSource:0}: Error finding container 2ee57aa9335bcb91f77a986e3f198c7d709cdc605dda1cc8bb3fd3cde5df9037: Status 404 returned error can't find the container with id 2ee57aa9335bcb91f77a986e3f198c7d709cdc605dda1cc8bb3fd3cde5df9037 Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.882230 4736 generic.go:334] "Generic (PLEG): container finished" podID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerID="b517f49007744252310ef1193e8ab73fdb08c0321bca6d24a51d089b1743c2eb" exitCode=0 Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.882617 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerDied","Data":"b517f49007744252310ef1193e8ab73fdb08c0321bca6d24a51d089b1743c2eb"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.885750 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08df962e-9bf1-4a47-bcb5-f3535f67ecd6" (UID: "08df962e-9bf1-4a47-bcb5-f3535f67ecd6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.889348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59554d7c7d-jq5k7" event={"ID":"3c9e0de3-6386-4733-bc2b-b2eec48d8098","Type":"ContainerStarted","Data":"2ee57aa9335bcb91f77a986e3f198c7d709cdc605dda1cc8bb3fd3cde5df9037"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.897684 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-648fb9b5bc-8f55h" event={"ID":"ee82b20e-e4ad-4267-9845-3c5838fa1e0f","Type":"ContainerStarted","Data":"6ad119724a1e5a674848ca08a34d62e950199d6ea2a07a09aee9d7a1d1dd70ab"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.907663 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" event={"ID":"ad07fdc6-06e5-4045-8049-783bc6e6d5c6","Type":"ContainerStarted","Data":"57eadc197504ee10bdba2fb0cde13481c443595246c70fffb9bd3e9e14be3816"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.914848 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerStarted","Data":"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.927833 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerStarted","Data":"f24d671fb965d82c3b93b00d1f27dcfb39c7e973d0ce70d47b20081fc8639b74"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.937052 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.937077 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08df962e-9bf1-4a47-bcb5-f3535f67ecd6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.973544 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" event={"ID":"08df962e-9bf1-4a47-bcb5-f3535f67ecd6","Type":"ContainerDied","Data":"11b2b75279071fde532a2fd3b2e6fe073245a4918eb7b3e341b4819f9347f95d"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.973604 4736 scope.go:117] "RemoveContainer" containerID="033205e197ca17b7f608a594a6840348c0a480f10e31c3cf017a0c019a6b51b4" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.973752 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b9c6995-vc6g8" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.974951 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.985312 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerStarted","Data":"4c8de0fefbba647f4dc0ab4c0e5384a3eb159a12cb000e51d9695b8475151fc2"} Mar 16 15:33:09 crc kubenswrapper[4736]: I0316 15:33:09.988089 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wvncr" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.070560 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.095712 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b9c6995-vc6g8"] Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.125160 4736 scope.go:117] "RemoveContainer" containerID="a495361ec4df1d9c4c4be03eb474564a59d9e8a9913b2f2ed4d92124b4479579" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.155618 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.155687 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.156247 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2b6b\" (UniqueName: \"kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.156303 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.156348 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.156393 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id\") pod \"2255bb68-be73-4c4f-8739-83783ae195f0\" (UID: \"2255bb68-be73-4c4f-8739-83783ae195f0\") " Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.156804 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.163134 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2255bb68-be73-4c4f-8739-83783ae195f0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.174854 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts" (OuterVolumeSpecName: "scripts") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.200187 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.207334 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b" (OuterVolumeSpecName: "kube-api-access-z2b6b") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "kube-api-access-z2b6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.265518 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.265544 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2b6b\" (UniqueName: \"kubernetes.io/projected/2255bb68-be73-4c4f-8739-83783ae195f0-kube-api-access-z2b6b\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.265556 4736 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.473440 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.566658 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data" (OuterVolumeSpecName: "config-data") pod "2255bb68-be73-4c4f-8739-83783ae195f0" (UID: "2255bb68-be73-4c4f-8739-83783ae195f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.596563 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.596597 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2255bb68-be73-4c4f-8739-83783ae195f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:10 crc kubenswrapper[4736]: I0316 15:33:10.848723 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.036650 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08df962e-9bf1-4a47-bcb5-f3535f67ecd6" path="/var/lib/kubelet/pods/08df962e-9bf1-4a47-bcb5-f3535f67ecd6/volumes" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.059258 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerStarted","Data":"276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.083129 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-wvncr" event={"ID":"2255bb68-be73-4c4f-8739-83783ae195f0","Type":"ContainerDied","Data":"fe31aef31069eb5830fe53762b8adfe35617534df7f912b8b3688c601198485a"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.083180 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe31aef31069eb5830fe53762b8adfe35617534df7f912b8b3688c601198485a" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.083254 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-wvncr" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.117432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerStarted","Data":"05f6466cd4d85ef1b03314db78912c872fc2a088ec525bb45628389fa1dece00"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.118866 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd955ff9f-lqqft" podStartSLOduration=4.813443574 podStartE2EDuration="20.118837889s" podCreationTimestamp="2026-03-16 15:32:51 +0000 UTC" firstStartedPulling="2026-03-16 15:32:53.07639758 +0000 UTC m=+1174.803787867" lastFinishedPulling="2026-03-16 15:33:08.381791895 +0000 UTC m=+1190.109182182" observedRunningTime="2026-03-16 15:33:11.087915311 +0000 UTC m=+1192.815305608" watchObservedRunningTime="2026-03-16 15:33:11.118837889 +0000 UTC m=+1192.846228176" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.124295 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59554d7c7d-jq5k7" event={"ID":"3c9e0de3-6386-4733-bc2b-b2eec48d8098","Type":"ContainerStarted","Data":"fd52de3438a5cc009398183d537de0649f28d4859372fc2a0098f13e910bb688"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.135047 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7897f5f54f-2vggz" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9696/\": dial tcp 10.217.0.160:9696: connect: connection refused" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.137078 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-648fb9b5bc-8f55h" event={"ID":"ee82b20e-e4ad-4267-9845-3c5838fa1e0f","Type":"ContainerStarted","Data":"ddc8602fd1bc045d7b7c660cb4767b05b11ff1f83fceb83d74374309ebaf7b31"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.159529 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" event={"ID":"ad07fdc6-06e5-4045-8049-783bc6e6d5c6","Type":"ContainerStarted","Data":"70a029b7789450c3486bd436e58c5f467ced2ee9e3c0c0c20bf5f379e35add15"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.170847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerStarted","Data":"00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41"} Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.178058 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-648fb9b5bc-8f55h" podStartSLOduration=16.178030074 podStartE2EDuration="16.178030074s" podCreationTimestamp="2026-03-16 15:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:11.178031664 +0000 UTC m=+1192.905421951" watchObservedRunningTime="2026-03-16 15:33:11.178030074 +0000 UTC m=+1192.905420381" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.244543 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" podStartSLOduration=5.315874408 podStartE2EDuration="20.244512664s" podCreationTimestamp="2026-03-16 15:32:51 +0000 UTC" 
firstStartedPulling="2026-03-16 15:32:53.473793291 +0000 UTC m=+1175.201183578" lastFinishedPulling="2026-03-16 15:33:08.402431547 +0000 UTC m=+1190.129821834" observedRunningTime="2026-03-16 15:33:11.215277481 +0000 UTC m=+1192.942667768" watchObservedRunningTime="2026-03-16 15:33:11.244512664 +0000 UTC m=+1192.971902951" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.306686 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.340658 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:11 crc kubenswrapper[4736]: E0316 15:33:11.341346 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" containerName="cinder-db-sync" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.341371 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" containerName="cinder-db-sync" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.351400 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" containerName="cinder-db-sync" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.359386 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.368761 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qdc58" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.380811 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.381258 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.398248 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.438022 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9tg\" (UniqueName: \"kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.452629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.452803 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.452863 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.453082 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.453136 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.483601 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.550619 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557746 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557779 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9tg\" (UniqueName: \"kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557852 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557889 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.557913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.561251 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.570073 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.600464 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.603960 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.605302 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9tg\" (UniqueName: \"kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.608993 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.609287 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.618472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.619540 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.659790 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjn9\" (UniqueName: \"kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.659852 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 
15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.659886 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.659905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.660059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.737852 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.765376 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.765434 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjn9\" (UniqueName: \"kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.765466 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.765498 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.765515 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.766519 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: 
\"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.772452 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.776223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.778046 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.795988 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.797727 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.810451 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.814608 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.831976 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjn9\" (UniqueName: \"kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9\") pod \"dnsmasq-dns-98bb585-hhxkb\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.833892 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.984947 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.987671 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.994712 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc 
kubenswrapper[4736]: I0316 15:33:11.994956 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.995119 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.995240 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.995518 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.995683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptdx\" (UniqueName: \"kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:11 crc kubenswrapper[4736]: I0316 15:33:11.990336 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.100855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.100915 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.100978 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.101006 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.101030 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.101138 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.101190 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptdx\" (UniqueName: \"kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.103263 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.103352 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.117189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " 
pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.124648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptdx\" (UniqueName: \"kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.137370 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.151527 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.151816 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.156559 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom\") pod \"cinder-api-0\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.242777 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.275171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59554d7c7d-jq5k7" event={"ID":"3c9e0de3-6386-4733-bc2b-b2eec48d8098","Type":"ContainerStarted","Data":"d4d28ce7189970d66ebfb5cfb43a1dfcb47d283e35445e09d81f5c697687215b"} Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.275263 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.275300 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.318993 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59554d7c7d-jq5k7" podStartSLOduration=17.318959432 podStartE2EDuration="17.318959432s" podCreationTimestamp="2026-03-16 15:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:12.304700761 +0000 UTC m=+1194.032091048" watchObservedRunningTime="2026-03-16 15:33:12.318959432 +0000 UTC m=+1194.046349719" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.324593 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-648fb9b5bc-8f55h" event={"ID":"ee82b20e-e4ad-4267-9845-3c5838fa1e0f","Type":"ContainerStarted","Data":"5c3508b8b26dfb6d2bf836668e38403234ebae23c55d5d9b4ace72187c16f8c4"} Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.379412 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" event={"ID":"ad07fdc6-06e5-4045-8049-783bc6e6d5c6","Type":"ContainerStarted","Data":"31bd9254c15ae6e8edc5bdd5b91728e91f768a2a3f55be94e9138d611e147415"} Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.436884 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5868c68fb4-ww9v7" podStartSLOduration=17.436841459 podStartE2EDuration="17.436841459s" podCreationTimestamp="2026-03-16 15:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:12.42976024 +0000 UTC m=+1194.157150527" watchObservedRunningTime="2026-03-16 15:33:12.436841459 +0000 UTC m=+1194.164231746" Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.502717 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.590279 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:12 crc kubenswrapper[4736]: I0316 15:33:12.882877 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:12 crc kubenswrapper[4736]: W0316 15:33:12.934655 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2436adc_ebdd_4481_92f8_b1c1d9010ab7.slice/crio-49d0be8f62d591a2d97146487c354fdbf0faa4d57128fffdb163d99791483be4 WatchSource:0}: Error finding container 49d0be8f62d591a2d97146487c354fdbf0faa4d57128fffdb163d99791483be4: Status 404 returned error can't find the container with id 
49d0be8f62d591a2d97146487c354fdbf0faa4d57128fffdb163d99791483be4 Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.190258 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.442674 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerStarted","Data":"e10c42b60a1ab70081f329e9150dce172acdc088ad3d2aecca633c9c890a9497"} Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.461562 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerStarted","Data":"271e224fe6cd13ece725fcf859eb2dda680e661d50dcaa7dabc9b4e1b5946208"} Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.465488 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98bb585-hhxkb" event={"ID":"b2436adc-ebdd-4481-92f8-b1c1d9010ab7","Type":"ContainerStarted","Data":"49d0be8f62d591a2d97146487c354fdbf0faa4d57128fffdb163d99791483be4"} Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.466668 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fd955ff9f-lqqft" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker-log" containerID="cri-o://4c8de0fefbba647f4dc0ab4c0e5384a3eb159a12cb000e51d9695b8475151fc2" gracePeriod=30 Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.467095 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener-log" containerID="cri-o://f24d671fb965d82c3b93b00d1f27dcfb39c7e973d0ce70d47b20081fc8639b74" gracePeriod=30 Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.467742 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5fd955ff9f-lqqft" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker" containerID="cri-o://276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d" gracePeriod=30 Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.467816 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener" containerID="cri-o://00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41" gracePeriod=30 Mar 16 15:33:13 crc kubenswrapper[4736]: I0316 15:33:13.467864 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerStarted","Data":"33fe39cfc8a8f6b5f4015e6274f0de9637477274b0c363318c733ee107e362fd"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.544711 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerStarted","Data":"a9da06a2bea8b11a8dcf443efedb3eb7c850a37344810275093138e32c14867a"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.547124 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.555645 4736 generic.go:334] "Generic (PLEG): container 
finished" podID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerID="c5f956b340f9ec106af69986aaf62f09070ac88eeaa93d7bd8a3af0ef40e69fc" exitCode=0 Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.555711 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98bb585-hhxkb" event={"ID":"b2436adc-ebdd-4481-92f8-b1c1d9010ab7","Type":"ContainerDied","Data":"c5f956b340f9ec106af69986aaf62f09070ac88eeaa93d7bd8a3af0ef40e69fc"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.588846 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5db557dd69-7c5jd" podStartSLOduration=5.588819829 podStartE2EDuration="5.588819829s" podCreationTimestamp="2026-03-16 15:33:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:14.578524663 +0000 UTC m=+1196.305914950" watchObservedRunningTime="2026-03-16 15:33:14.588819829 +0000 UTC m=+1196.316210116" Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.594537 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerID="f24d671fb965d82c3b93b00d1f27dcfb39c7e973d0ce70d47b20081fc8639b74" exitCode=143 Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.594657 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerDied","Data":"f24d671fb965d82c3b93b00d1f27dcfb39c7e973d0ce70d47b20081fc8639b74"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.627123 4736 generic.go:334] "Generic (PLEG): container finished" podID="883da065-2249-44ef-8c33-e8bdd36f824b" containerID="4c8de0fefbba647f4dc0ab4c0e5384a3eb159a12cb000e51d9695b8475151fc2" exitCode=143 Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.627630 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerDied","Data":"4c8de0fefbba647f4dc0ab4c0e5384a3eb159a12cb000e51d9695b8475151fc2"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.664055 4736 generic.go:334] "Generic (PLEG): container finished" podID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerID="633590f4c22afa7d5bd76b91d83d0a8b909c99107b2b5195f5bd99e08956b8d7" exitCode=0 Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.664151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerDied","Data":"633590f4c22afa7d5bd76b91d83d0a8b909c99107b2b5195f5bd99e08956b8d7"} Mar 16 15:33:14 crc kubenswrapper[4736]: I0316 15:33:14.849828 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.612698 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.705242 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98bb585-hhxkb" event={"ID":"b2436adc-ebdd-4481-92f8-b1c1d9010ab7","Type":"ContainerStarted","Data":"1c90f848d4197065da6c945e6f4a35ab2ed0a74b1d1739ef43359b6a86d447b1"} Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.705442 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.710820 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7897f5f54f-2vggz" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.710870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7897f5f54f-2vggz" event={"ID":"2ac9f390-a870-4309-bb1d-2360a1ae1c7f","Type":"ContainerDied","Data":"ee2e837ef05a0609463a2f14ce7197a023eb4d66582200f9c0d5eafc4511f5f5"} Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.710913 4736 scope.go:117] "RemoveContainer" containerID="b517f49007744252310ef1193e8ab73fdb08c0321bca6d24a51d089b1743c2eb" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.759577 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760070 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760180 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760209 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760277 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760304 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.760352 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4gwj\" (UniqueName: 
\"kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj\") pod \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\" (UID: \"2ac9f390-a870-4309-bb1d-2360a1ae1c7f\") " Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.788892 4736 scope.go:117] "RemoveContainer" containerID="633590f4c22afa7d5bd76b91d83d0a8b909c99107b2b5195f5bd99e08956b8d7" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.831506 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj" (OuterVolumeSpecName: "kube-api-access-n4gwj") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "kube-api-access-n4gwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.866051 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4gwj\" (UniqueName: \"kubernetes.io/projected/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-kube-api-access-n4gwj\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.916881 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.925371 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98bb585-hhxkb" podStartSLOduration=4.91625061 podStartE2EDuration="4.91625061s" podCreationTimestamp="2026-03-16 15:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:15.90164825 +0000 UTC m=+1197.629038537" watchObservedRunningTime="2026-03-16 15:33:15.91625061 +0000 UTC m=+1197.643640897" Mar 16 15:33:15 crc kubenswrapper[4736]: I0316 15:33:15.968795 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.017243 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.071538 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.100083 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.130283 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.178086 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.178171 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.189282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.194987 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config" (OuterVolumeSpecName: "config") pod "2ac9f390-a870-4309-bb1d-2360a1ae1c7f" (UID: "2ac9f390-a870-4309-bb1d-2360a1ae1c7f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.280762 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.280823 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac9f390-a870-4309-bb1d-2360a1ae1c7f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.373164 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.420934 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7897f5f54f-2vggz"] Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.724429 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerStarted","Data":"8b8210fd0c32f5a847cfe7b9a95682a87c64f2ca1fa216d4cb5cd86ff6ef2af6"} Mar 16 15:33:16 crc kubenswrapper[4736]: I0316 15:33:16.994753 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" path="/var/lib/kubelet/pods/2ac9f390-a870-4309-bb1d-2360a1ae1c7f/volumes" Mar 16 15:33:17 crc kubenswrapper[4736]: I0316 15:33:17.872332 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerStarted","Data":"4f7ac9b1ab5fe218b305f86862caab1e143d479ca30195e1bdf96922bb986e04"} Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.930224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerStarted","Data":"ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815"} Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.931002 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api-log" containerID="cri-o://4f7ac9b1ab5fe218b305f86862caab1e143d479ca30195e1bdf96922bb986e04" gracePeriod=30 Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.931334 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.931610 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api" containerID="cri-o://ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815" gracePeriod=30 Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.945002 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerStarted","Data":"19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e"} Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.965543 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.965518121 podStartE2EDuration="8.965518121s" podCreationTimestamp="2026-03-16 15:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:19.956277594 +0000 UTC m=+1201.683667881" watchObservedRunningTime="2026-03-16 15:33:19.965518121 +0000 UTC m=+1201.692908408" Mar 16 15:33:19 crc kubenswrapper[4736]: I0316 15:33:19.999822 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.917535119 podStartE2EDuration="8.999799928s" podCreationTimestamp="2026-03-16 15:33:11 +0000 UTC" firstStartedPulling="2026-03-16 15:33:12.666896228 +0000 UTC m=+1194.394286515" lastFinishedPulling="2026-03-16 15:33:13.749161047 +0000 UTC m=+1195.476551324" observedRunningTime="2026-03-16 15:33:19.99087268 +0000 UTC m=+1201.718262957" watchObservedRunningTime="2026-03-16 15:33:19.999799928 +0000 UTC m=+1201.727190215" Mar 16 15:33:20 crc kubenswrapper[4736]: I0316 15:33:20.569445 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:33:20 crc kubenswrapper[4736]: I0316 15:33:20.968090 4736 generic.go:334] "Generic (PLEG): container finished" podID="590b1c05-0ddc-4332-986b-525b6720d465" containerID="4f7ac9b1ab5fe218b305f86862caab1e143d479ca30195e1bdf96922bb986e04" exitCode=143 Mar 16 15:33:20 crc kubenswrapper[4736]: I0316 15:33:20.968192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerDied","Data":"4f7ac9b1ab5fe218b305f86862caab1e143d479ca30195e1bdf96922bb986e04"} Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.243420 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59554d7c7d-jq5k7" podUID="3c9e0de3-6386-4733-bc2b-b2eec48d8098" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.171:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.607024 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.607127 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.608163 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf"} pod="openstack/horizon-67c978df54-kdnqn" containerMessage="Container horizon failed startup probe, will be restarted" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.608195 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" containerID="cri-o://155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf" gracePeriod=30 Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.741901 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.747444 4736 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.173:8080/\": dial tcp 10.217.0.173:8080: connect: connection refused" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.815669 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.815754 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.816637 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8"} pod="openstack/horizon-ff55bcd5b-psrsc" containerMessage="Container horizon failed startup probe, will be restarted" Mar 16 15:33:21 crc kubenswrapper[4736]: I0316 15:33:21.816677 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" containerID="cri-o://ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8" gracePeriod=30 Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:21.999442 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.008890 4736 generic.go:334] "Generic (PLEG): container finished" podID="590b1c05-0ddc-4332-986b-525b6720d465" containerID="ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815" exitCode=0 Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.009270 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerDied","Data":"ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815"} Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.115652 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.115990 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="dnsmasq-dns" containerID="cri-o://6333ff67604da865829bb9fef5a314db61bbf3710b586801fb39251551619648" gracePeriod=10 Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.255637 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.342874 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.392766 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.392919 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.393028 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.393055 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.393091 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.393189 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bptdx\" (UniqueName: \"kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.393241 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts\") pod \"590b1c05-0ddc-4332-986b-525b6720d465\" (UID: \"590b1c05-0ddc-4332-986b-525b6720d465\") " Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.410433 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.410836 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs" (OuterVolumeSpecName: "logs") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.411594 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.449391 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx" (OuterVolumeSpecName: "kube-api-access-bptdx") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "kube-api-access-bptdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.466383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts" (OuterVolumeSpecName: "scripts") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.499624 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/590b1c05-0ddc-4332-986b-525b6720d465-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.499653 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.499663 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bptdx\" (UniqueName: \"kubernetes.io/projected/590b1c05-0ddc-4332-986b-525b6720d465-kube-api-access-bptdx\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.499672 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.499681 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/590b1c05-0ddc-4332-986b-525b6720d465-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.535393 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.607166 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.677321 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data" (OuterVolumeSpecName: "config-data") pod "590b1c05-0ddc-4332-986b-525b6720d465" (UID: "590b1c05-0ddc-4332-986b-525b6720d465"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:22 crc kubenswrapper[4736]: I0316 15:33:22.712263 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590b1c05-0ddc-4332-986b-525b6720d465-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.072432 4736 generic.go:334] "Generic (PLEG): container finished" podID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerID="6333ff67604da865829bb9fef5a314db61bbf3710b586801fb39251551619648" exitCode=0 Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.072574 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" event={"ID":"ece0fa37-8578-46d1-879f-4486ae0be3b5","Type":"ContainerDied","Data":"6333ff67604da865829bb9fef5a314db61bbf3710b586801fb39251551619648"} Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.072610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" event={"ID":"ece0fa37-8578-46d1-879f-4486ae0be3b5","Type":"ContainerDied","Data":"431b50559b09e3b5ea49d3e3f6d4088da5ff47d0ec667d4b05c5ca12cf858576"} Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.072623 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431b50559b09e3b5ea49d3e3f6d4088da5ff47d0ec667d4b05c5ca12cf858576" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.081604 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"590b1c05-0ddc-4332-986b-525b6720d465","Type":"ContainerDied","Data":"33fe39cfc8a8f6b5f4015e6274f0de9637477274b0c363318c733ee107e362fd"} Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.081683 4736 scope.go:117] "RemoveContainer" containerID="ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.081882 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.273565 4736 scope.go:117] "RemoveContainer" containerID="4f7ac9b1ab5fe218b305f86862caab1e143d479ca30195e1bdf96922bb986e04" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.293845 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.448573 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc\") pod \"ece0fa37-8578-46d1-879f-4486ae0be3b5\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.448680 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cctzv\" (UniqueName: \"kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv\") pod \"ece0fa37-8578-46d1-879f-4486ae0be3b5\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.448786 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config\") pod \"ece0fa37-8578-46d1-879f-4486ae0be3b5\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.448833 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb\") pod \"ece0fa37-8578-46d1-879f-4486ae0be3b5\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.448933 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb\") pod \"ece0fa37-8578-46d1-879f-4486ae0be3b5\" (UID: \"ece0fa37-8578-46d1-879f-4486ae0be3b5\") " Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.465406 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv" (OuterVolumeSpecName: "kube-api-access-cctzv") pod "ece0fa37-8578-46d1-879f-4486ae0be3b5" (UID: "ece0fa37-8578-46d1-879f-4486ae0be3b5"). InnerVolumeSpecName "kube-api-access-cctzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.546961 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ece0fa37-8578-46d1-879f-4486ae0be3b5" (UID: "ece0fa37-8578-46d1-879f-4486ae0be3b5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.551950 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cctzv\" (UniqueName: \"kubernetes.io/projected/ece0fa37-8578-46d1-879f-4486ae0be3b5-kube-api-access-cctzv\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.551981 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.565000 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ece0fa37-8578-46d1-879f-4486ae0be3b5" (UID: "ece0fa37-8578-46d1-879f-4486ae0be3b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.571505 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ece0fa37-8578-46d1-879f-4486ae0be3b5" (UID: "ece0fa37-8578-46d1-879f-4486ae0be3b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.577814 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config" (OuterVolumeSpecName: "config") pod "ece0fa37-8578-46d1-879f-4486ae0be3b5" (UID: "ece0fa37-8578-46d1-879f-4486ae0be3b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.653998 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.654039 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:23 crc kubenswrapper[4736]: I0316 15:33:23.654053 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ece0fa37-8578-46d1-879f-4486ae0be3b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:24 crc kubenswrapper[4736]: I0316 15:33:24.094865 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f488c94c5-5j5ct" Mar 16 15:33:24 crc kubenswrapper[4736]: I0316 15:33:24.144143 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:33:24 crc kubenswrapper[4736]: I0316 15:33:24.173575 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f488c94c5-5j5ct"] Mar 16 15:33:24 crc kubenswrapper[4736]: I0316 15:33:24.992144 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" path="/var/lib/kubelet/pods/ece0fa37-8578-46d1-879f-4486ae0be3b5/volumes" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.229956 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59554d7c7d-jq5k7" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.298751 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.385435 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.385773 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" containerID="cri-o://81a59404d5ea2e085f459d6aa0498fb07ae90943e16dd7034255b96ccf1633e7" gracePeriod=30 Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.386488 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" containerID="cri-o://d63ced8971ea02f8c1b87b4668549309fed45bcf7d5ff09f4e47c6dd7f93c5c6" gracePeriod=30 Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.615849 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.938009 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-784f554468-tgz6j"] Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939734 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-api" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939756 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-api" Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939782 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="dnsmasq-dns" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939789 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="dnsmasq-dns" Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939826 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api-log" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939833 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api-log" Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939859 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="init" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939866 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="init" Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939881 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939887 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api" Mar 16 15:33:25 crc kubenswrapper[4736]: E0316 15:33:25.939900 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.939908 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.953276 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api-log" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.953349 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-httpd" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.953377 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece0fa37-8578-46d1-879f-4486ae0be3b5" containerName="dnsmasq-dns" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.953397 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac9f390-a870-4309-bb1d-2360a1ae1c7f" containerName="neutron-api" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.953424 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="590b1c05-0ddc-4332-986b-525b6720d465" containerName="cinder-api" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.955725 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:25 crc kubenswrapper[4736]: I0316 15:33:25.979024 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-784f554468-tgz6j"] Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.000484 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59554d7c7d-jq5k7" podUID="3c9e0de3-6386-4733-bc2b-b2eec48d8098" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.171:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-public-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042371 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-internal-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042406 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272gp\" (UniqueName: \"kubernetes.io/projected/f744bb37-172a-4e29-b348-5b70d53c5d16-kube-api-access-272gp\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042467 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-combined-ca-bundle\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042487 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-config-data\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042545 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f744bb37-172a-4e29-b348-5b70d53c5d16-logs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.042579 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-scripts\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148151 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f744bb37-172a-4e29-b348-5b70d53c5d16-logs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148268 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-scripts\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148331 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-public-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148545 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-internal-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272gp\" (UniqueName: \"kubernetes.io/projected/f744bb37-172a-4e29-b348-5b70d53c5d16-kube-api-access-272gp\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148793 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-combined-ca-bundle\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.148816 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-config-data\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.150323 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f744bb37-172a-4e29-b348-5b70d53c5d16-logs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.167478 4736 generic.go:334] "Generic (PLEG): container finished" podID="ea251deb-26c7-4688-be95-095a296e46fa" containerID="81a59404d5ea2e085f459d6aa0498fb07ae90943e16dd7034255b96ccf1633e7" exitCode=143 Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.168523 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerDied","Data":"81a59404d5ea2e085f459d6aa0498fb07ae90943e16dd7034255b96ccf1633e7"} Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 
15:33:26.183724 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-internal-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.186358 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-scripts\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.186460 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-config-data\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.186547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-public-tls-certs\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.190573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272gp\" (UniqueName: \"kubernetes.io/projected/f744bb37-172a-4e29-b348-5b70d53c5d16-kube-api-access-272gp\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.203463 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f744bb37-172a-4e29-b348-5b70d53c5d16-combined-ca-bundle\") pod \"placement-784f554468-tgz6j\" (UID: \"f744bb37-172a-4e29-b348-5b70d53c5d16\") " pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:26 crc kubenswrapper[4736]: I0316 15:33:26.327435 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:27 crc kubenswrapper[4736]: I0316 15:33:27.054815 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 16 15:33:27 crc kubenswrapper[4736]: I0316 15:33:27.131522 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:27 crc kubenswrapper[4736]: I0316 15:33:27.177065 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="cinder-scheduler" containerID="cri-o://8b8210fd0c32f5a847cfe7b9a95682a87c64f2ca1fa216d4cb5cd86ff6ef2af6" gracePeriod=30 Mar 16 15:33:27 crc kubenswrapper[4736]: I0316 15:33:27.178840 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="probe" containerID="cri-o://19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e" gracePeriod=30 Mar 16 15:33:28 crc kubenswrapper[4736]: I0316 15:33:28.290458 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-654fb4cdb6-6lld5" Mar 16 15:33:28 crc kubenswrapper[4736]: I0316 15:33:28.917455 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:58134->10.217.0.167:9311: read: connection reset by peer" Mar 16 15:33:28 crc kubenswrapper[4736]: I0316 15:33:28.917564 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:58126->10.217.0.167:9311: read: connection reset by peer" Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.213961 4736 generic.go:334] "Generic (PLEG): container finished" podID="ea251deb-26c7-4688-be95-095a296e46fa" containerID="d63ced8971ea02f8c1b87b4668549309fed45bcf7d5ff09f4e47c6dd7f93c5c6" exitCode=0 Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.214074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerDied","Data":"d63ced8971ea02f8c1b87b4668549309fed45bcf7d5ff09f4e47c6dd7f93c5c6"} Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.224537 4736 generic.go:334] "Generic (PLEG): container finished" podID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerID="19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e" exitCode=0 Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.224583 4736 generic.go:334] "Generic (PLEG): container finished" podID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerID="8b8210fd0c32f5a847cfe7b9a95682a87c64f2ca1fa216d4cb5cd86ff6ef2af6" exitCode=0 Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.224636 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerDied","Data":"19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e"} Mar 16 15:33:29 crc kubenswrapper[4736]: I0316 15:33:29.224736 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerDied","Data":"8b8210fd0c32f5a847cfe7b9a95682a87c64f2ca1fa216d4cb5cd86ff6ef2af6"} Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.364848 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.381013 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.389621 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.390163 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hm9bm" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.390553 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.403124 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.461417 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config-secret\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.461488 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.461515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.461619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdnm\" (UniqueName: \"kubernetes.io/projected/421bab10-ac4a-458f-98e3-18cd0adef038-kube-api-access-pkdnm\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.563529 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdnm\" (UniqueName: \"kubernetes.io/projected/421bab10-ac4a-458f-98e3-18cd0adef038-kube-api-access-pkdnm\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.563646 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config-secret\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: 
I0316 15:33:30.563683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.563708 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.564638 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.572925 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-openstack-config-secret\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.581848 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421bab10-ac4a-458f-98e3-18cd0adef038-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.585062 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdnm\" (UniqueName: \"kubernetes.io/projected/421bab10-ac4a-458f-98e3-18cd0adef038-kube-api-access-pkdnm\") pod \"openstackclient\" (UID: \"421bab10-ac4a-458f-98e3-18cd0adef038\") " pod="openstack/openstackclient" Mar 16 15:33:30 crc kubenswrapper[4736]: I0316 15:33:30.720349 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.086448 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.102691 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.238139 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prw5s\" (UniqueName: \"kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s\") pod \"ea251deb-26c7-4688-be95-095a296e46fa\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.238232 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data\") pod \"ea251deb-26c7-4688-be95-095a296e46fa\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.238266 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.238344 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs\") pod \"ea251deb-26c7-4688-be95-095a296e46fa\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.238460 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239400 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239521 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239671 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239726 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom\") pod \"ea251deb-26c7-4688-be95-095a296e46fa\" (UID: \"ea251deb-26c7-4688-be95-095a296e46fa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle\") pod \"ea251deb-26c7-4688-be95-095a296e46fa\" (UID: 
\"ea251deb-26c7-4688-be95-095a296e46fa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.239876 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9tg\" (UniqueName: \"kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg\") pod \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\" (UID: \"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa\") " Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.246677 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs" (OuterVolumeSpecName: "logs") pod "ea251deb-26c7-4688-be95-095a296e46fa" (UID: "ea251deb-26c7-4688-be95-095a296e46fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.251051 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s" (OuterVolumeSpecName: "kube-api-access-prw5s") pod "ea251deb-26c7-4688-be95-095a296e46fa" (UID: "ea251deb-26c7-4688-be95-095a296e46fa"). InnerVolumeSpecName "kube-api-access-prw5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.258290 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.265364 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts" (OuterVolumeSpecName: "scripts") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.284821 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg" (OuterVolumeSpecName: "kube-api-access-wj9tg") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "kube-api-access-wj9tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.285006 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ea251deb-26c7-4688-be95-095a296e46fa" (UID: "ea251deb-26c7-4688-be95-095a296e46fa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.309041 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.330568 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa","Type":"ContainerDied","Data":"271e224fe6cd13ece725fcf859eb2dda680e661d50dcaa7dabc9b4e1b5946208"} Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.330650 4736 scope.go:117] "RemoveContainer" containerID="19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.330865 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.344369 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.344687 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.344778 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9tg\" (UniqueName: \"kubernetes.io/projected/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-kube-api-access-wj9tg\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.344868 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prw5s\" (UniqueName: \"kubernetes.io/projected/ea251deb-26c7-4688-be95-095a296e46fa-kube-api-access-prw5s\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.344941 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea251deb-26c7-4688-be95-095a296e46fa-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.345009 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.345076 4736 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.359914 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56c87c56bd-8q8cv" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.359638 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56c87c56bd-8q8cv" event={"ID":"ea251deb-26c7-4688-be95-095a296e46fa","Type":"ContainerDied","Data":"2acf81282adcaf5e5ccec50ff2b5a8bb0fbce0cf7739e4f2f754e27f2b0f09de"} Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.366805 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea251deb-26c7-4688-be95-095a296e46fa" (UID: "ea251deb-26c7-4688-be95-095a296e46fa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.411994 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.441281 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data" (OuterVolumeSpecName: "config-data") pod "ea251deb-26c7-4688-be95-095a296e46fa" (UID: "ea251deb-26c7-4688-be95-095a296e46fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.447776 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.448227 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.448274 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea251deb-26c7-4688-be95-095a296e46fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.515093 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data" (OuterVolumeSpecName: "config-data") pod "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" (UID: "1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.550839 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.672297 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.684900 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.719127 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.742185 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56c87c56bd-8q8cv"] Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.757604 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:33 crc kubenswrapper[4736]: E0316 15:33:33.758327 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.758413 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" Mar 16 15:33:33 crc kubenswrapper[4736]: E0316 15:33:33.758481 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="cinder-scheduler" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.758530 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="cinder-scheduler" Mar 16 15:33:33 crc kubenswrapper[4736]: E0316 15:33:33.758606 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="probe" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.758664 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="probe" Mar 16 15:33:33 crc kubenswrapper[4736]: E0316 15:33:33.758719 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.758783 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.759035 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="cinder-scheduler" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.759120 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.759174 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" containerName="probe" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.760398 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.762710 4736 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.775875 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.776353 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.776591 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qdc58" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.782438 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.787830 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.857823 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.857916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzj7\" (UniqueName: \"kubernetes.io/projected/1d62491e-6f65-49ab-8baf-3c653e7df95e-kube-api-access-whzj7\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.857999 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.858020 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d62491e-6f65-49ab-8baf-3c653e7df95e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.858080 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.858159 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960375 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960544 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960639 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whzj7\" (UniqueName: \"kubernetes.io/projected/1d62491e-6f65-49ab-8baf-3c653e7df95e-kube-api-access-whzj7\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960669 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960700 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d62491e-6f65-49ab-8baf-3c653e7df95e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.960822 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1d62491e-6f65-49ab-8baf-3c653e7df95e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.968118 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.968444 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-config-data\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.971680 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-scripts\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.971881 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d62491e-6f65-49ab-8baf-3c653e7df95e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:33 crc kubenswrapper[4736]: I0316 15:33:33.990020 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whzj7\" (UniqueName: \"kubernetes.io/projected/1d62491e-6f65-49ab-8baf-3c653e7df95e-kube-api-access-whzj7\") pod \"cinder-scheduler-0\" (UID: \"1d62491e-6f65-49ab-8baf-3c653e7df95e\") " pod="openstack/cinder-scheduler-0" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.083265 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 16 15:33:34 crc kubenswrapper[4736]: E0316 15:33:34.233293 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 16 15:33:34 crc kubenswrapper[4736]: E0316 15:33:34.233496 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gv9pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4dbfbe46-5ac1-4ba4-bea6-e1712d671e78): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 16 15:33:34 crc kubenswrapper[4736]: E0316 15:33:34.234851 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.267876 4736 scope.go:117] "RemoveContainer" containerID="8b8210fd0c32f5a847cfe7b9a95682a87c64f2ca1fa216d4cb5cd86ff6ef2af6" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.387827 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-central-agent" containerID="cri-o://6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" gracePeriod=30 Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.388535 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="sg-core" containerID="cri-o://6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" gracePeriod=30 Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.388591 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-notification-agent" containerID="cri-o://cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" gracePeriod=30 Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.407288 4736 scope.go:117] "RemoveContainer" containerID="d63ced8971ea02f8c1b87b4668549309fed45bcf7d5ff09f4e47c6dd7f93c5c6" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.509371 4736 scope.go:117] "RemoveContainer" containerID="81a59404d5ea2e085f459d6aa0498fb07ae90943e16dd7034255b96ccf1633e7" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.625615 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.998473 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa" path="/var/lib/kubelet/pods/1726cfcf-6fcf-4acd-9a3e-dbcc26a797aa/volumes" Mar 16 15:33:34 crc kubenswrapper[4736]: I0316 15:33:34.999333 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea251deb-26c7-4688-be95-095a296e46fa" path="/var/lib/kubelet/pods/ea251deb-26c7-4688-be95-095a296e46fa/volumes" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.000033 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.051216 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-784f554468-tgz6j"] Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.321483 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.402214 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d62491e-6f65-49ab-8baf-3c653e7df95e","Type":"ContainerStarted","Data":"80d206a6c61110446940ba43339b20a96a0018aa90a91e1fe5133050f3e02ca5"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.403805 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"421bab10-ac4a-458f-98e3-18cd0adef038","Type":"ContainerStarted","Data":"0d7ecf81b677456abd043e9b67210b9bfe084a3c7e0e8b697c40374794eb7887"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.410285 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784f554468-tgz6j" event={"ID":"f744bb37-172a-4e29-b348-5b70d53c5d16","Type":"ContainerStarted","Data":"7220b4304daaa7835c9e38be0fa8267f756435c05c98d395bc22c3abbc18b1d7"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418645 4736 generic.go:334] "Generic (PLEG): container finished" podID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" exitCode=2 Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418677 4736 generic.go:334] "Generic (PLEG): container finished" podID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" exitCode=0 Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418686 4736 generic.go:334] "Generic (PLEG): container finished" podID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" exitCode=0 Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418730 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418749 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerDied","Data":"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerDied","Data":"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418835 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerDied","Data":"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78","Type":"ContainerDied","Data":"9875a2b3a1b5a19021d12954e04c81fe098e2226c941830636703e9974769cd0"} Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.418868 4736 scope.go:117] "RemoveContainer" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.421996 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422092 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422280 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422352 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422812 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422897 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv9pv\" (UniqueName: \"kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.422917 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.423087 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.423257 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle\") pod \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\" (UID: \"4dbfbe46-5ac1-4ba4-bea6-e1712d671e78\") " Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.423807 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.423826 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.432125 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv" (OuterVolumeSpecName: "kube-api-access-gv9pv") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "kube-api-access-gv9pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.435789 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts" (OuterVolumeSpecName: "scripts") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.462584 4736 scope.go:117] "RemoveContainer" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.464793 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.508101 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data" (OuterVolumeSpecName: "config-data") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.512300 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" (UID: "4dbfbe46-5ac1-4ba4-bea6-e1712d671e78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.521667 4736 scope.go:117] "RemoveContainer" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.525389 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.525408 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv9pv\" (UniqueName: \"kubernetes.io/projected/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-kube-api-access-gv9pv\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.525419 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.525428 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.525439 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.586691 4736 scope.go:117] "RemoveContainer" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" Mar 16 15:33:35 crc kubenswrapper[4736]: E0316 15:33:35.587213 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": container with ID starting with 6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe not found: ID does not exist" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.587248 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe"} err="failed to get container status \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": rpc error: code = NotFound desc = could not find container \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": 
container with ID starting with 6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.587285 4736 scope.go:117] "RemoveContainer" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" Mar 16 15:33:35 crc kubenswrapper[4736]: E0316 15:33:35.587790 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": container with ID starting with cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d not found: ID does not exist" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.587861 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d"} err="failed to get container status \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": rpc error: code = NotFound desc = could not find container \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": container with ID starting with cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.587883 4736 scope.go:117] "RemoveContainer" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" Mar 16 15:33:35 crc kubenswrapper[4736]: E0316 15:33:35.588282 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": container with ID starting with 6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590 not found: ID does not exist" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.588325 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590"} err="failed to get container status \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": rpc error: code = NotFound desc = could not find container \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": container with ID starting with 6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590 not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.588356 4736 scope.go:117] "RemoveContainer" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.588798 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe"} err="failed to get container status \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": rpc error: code = NotFound desc = could not find container \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": container with ID starting with 6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.588822 4736 scope.go:117] "RemoveContainer" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" 
Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.589047 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d"} err="failed to get container status \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": rpc error: code = NotFound desc = could not find container \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": container with ID starting with cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.589067 4736 scope.go:117] "RemoveContainer" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.589539 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590"} err="failed to get container status \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": rpc error: code = NotFound desc = could not find container \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": container with ID starting with 6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590 not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.589563 4736 scope.go:117] "RemoveContainer" containerID="6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.589995 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe"} err="failed to get container status \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": rpc error: code = NotFound desc = could not find container \"6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe\": container with ID starting with 6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.590014 4736 scope.go:117] "RemoveContainer" containerID="cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.591194 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d"} err="failed to get container status \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": rpc error: code = NotFound desc = could not find container \"cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d\": container with ID starting with cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d not found: ID does not exist" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.591216 4736 scope.go:117] "RemoveContainer" containerID="6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590" Mar 16 15:33:35 crc kubenswrapper[4736]: I0316 15:33:35.591554 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590"} err="failed to get container status \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": rpc error: code = NotFound desc = could not find container \"6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590\": 
container with ID starting with 6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590 not found: ID does not exist" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.014193 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:33:36 crc kubenswrapper[4736]: E0316 15:33:36.014975 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-notification-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.014990 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-notification-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: E0316 15:33:36.015014 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="sg-core" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.015020 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="sg-core" Mar 16 15:33:36 crc kubenswrapper[4736]: E0316 15:33:36.015055 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-central-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.015064 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-central-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.015421 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="sg-core" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.015454 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-central-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.015469 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" containerName="ceilometer-notification-agent" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.016349 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.035232 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.038205 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.038430 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-rngjg" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.058442 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.063145 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.094023 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.132963 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.141209 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.141639 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskwq\" (UniqueName: \"kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.141716 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.141810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.141897 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.151743 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.151936 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.154161 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.192418 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.210015 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246774 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246855 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246882 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fskwq\" (UniqueName: \"kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246906 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246922 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.246954 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.247025 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.247064 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.247091 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.247147 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4xz\" (UniqueName: \"kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.247170 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.256751 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.272026 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.272442 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.273900 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.279486 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskwq\" (UniqueName: \"kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.287766 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.289269 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom\") pod \"heat-engine-78d5db5598-m7dnz\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.297521 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.330222 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.348908 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7nzz\" (UniqueName: \"kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 
15:33:36.348976 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349015 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349071 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349095 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349165 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349193 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349217 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85r4\" (UniqueName: \"kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349237 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j4xz\" (UniqueName: \"kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349262 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349285 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349318 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349350 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349372 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349394 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.349410 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.352426 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.352840 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.358296 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.368488 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.368563 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.369823 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.394378 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.395039 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j4xz\" (UniqueName: \"kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz\") pod \"ceilometer-0\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.396704 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.399319 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.407415 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.421534 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85r4\" (UniqueName: \"kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455675 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455724 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455834 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n7nzz\" (UniqueName: \"kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455870 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455905 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455961 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.455986 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.457094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.458223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.460567 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.461583 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.466423 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data\") pod 
\"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.474860 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.491958 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7nzz\" (UniqueName: \"kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz\") pod \"dnsmasq-dns-7b87666975-blf44\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.494011 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.496033 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85r4\" (UniqueName: \"kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.499864 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle\") pod \"heat-cfnapi-56b5646679-zf58k\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.500165 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d62491e-6f65-49ab-8baf-3c653e7df95e","Type":"ContainerStarted","Data":"d2168ce30863ebcf42235a084011def65eb99b812e73b9e9440ea7225ed42683"} Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.519822 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.524438 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784f554468-tgz6j" event={"ID":"f744bb37-172a-4e29-b348-5b70d53c5d16","Type":"ContainerStarted","Data":"47361223ebfe472518b754c9b5ce8fa52e3c8f2cb55ed009d3783e865e5e51ef"} Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.529679 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-784f554468-tgz6j" event={"ID":"f744bb37-172a-4e29-b348-5b70d53c5d16","Type":"ContainerStarted","Data":"f12a1dd30ef01d6b0bc09a87971d12c162417effa1cb002d3ee615472c2a0023"} Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.530923 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.531052 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.552475 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.557550 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.557843 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.558154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dczsj\" (UniqueName: \"kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.558393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.565122 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-784f554468-tgz6j" podStartSLOduration=11.565076046 podStartE2EDuration="11.565076046s" podCreationTimestamp="2026-03-16 15:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:36.555161161 +0000 UTC m=+1218.282551448" watchObservedRunningTime="2026-03-16 15:33:36.565076046 +0000 UTC m=+1218.292466343" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.662848 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.665547 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.665886 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dczsj\" (UniqueName: \"kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.666201 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.697470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.697716 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.703586 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dczsj\" (UniqueName: \"kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.714507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom\") pod \"heat-api-799b5776df-77jmf\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:36 crc kubenswrapper[4736]: I0316 15:33:36.848314 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.068533 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbfbe46-5ac1-4ba4-bea6-e1712d671e78" path="/var/lib/kubelet/pods/4dbfbe46-5ac1-4ba4-bea6-e1712d671e78/volumes" Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.085216 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.093886 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.094355 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56c87c56bd-8q8cv" podUID="ea251deb-26c7-4688-be95-095a296e46fa" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": dial tcp 10.217.0.167:9311: i/o timeout (Client.Timeout exceeded while awaiting headers)" Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.276767 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.601683 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.603978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78d5db5598-m7dnz" event={"ID":"2a16c147-1622-4470-a54d-e331fae4ea8a","Type":"ContainerStarted","Data":"b3970329543467f63c17fe569e5ed8e82ada55c03202f066deeb45a36ba40f65"} Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.615355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerStarted","Data":"52440f446e2eb427ca8b6e76e7acad42f9fb19d4abdf19939e01a264b1089c43"} Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.656042 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:33:37 crc kubenswrapper[4736]: I0316 15:33:37.905576 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.507608 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.512649 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.512700 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.513713 
4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.513781 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e" gracePeriod=600 Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.637994 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78d5db5598-m7dnz" event={"ID":"2a16c147-1622-4470-a54d-e331fae4ea8a","Type":"ContainerStarted","Data":"ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.638997 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.659350 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-799b5776df-77jmf" event={"ID":"de867ad2-920b-4719-972b-71b4cbb83e2e","Type":"ContainerStarted","Data":"408f9f1de266a5aa2a78b12eaadb1dfdfb6b328d07103fb08f269b2dc8e96b04"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.687701 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56b5646679-zf58k" event={"ID":"55d9a0a2-2842-4c23-98b1-2f97469a951f","Type":"ContainerStarted","Data":"5e5eccee78369ffaddead3274e32e3495180636a1612230380a70f32ea04eef8"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.704562 4736 generic.go:334] "Generic (PLEG): container finished" podID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerID="5265934484f061fe78b8bb8f456241117f6ebf2461459504fdd39ccb8385dc56" exitCode=0 Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.706190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b87666975-blf44" event={"ID":"5f712fb8-4a0f-400d-b21d-1fc72a671b31","Type":"ContainerDied","Data":"5265934484f061fe78b8bb8f456241117f6ebf2461459504fdd39ccb8385dc56"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.706230 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b87666975-blf44" event={"ID":"5f712fb8-4a0f-400d-b21d-1fc72a671b31","Type":"ContainerStarted","Data":"9b7b7c334e0f45c052302d3eaba8d1a61e61e9a850f1d4beb95d1a990d50b594"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.730380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1d62491e-6f65-49ab-8baf-3c653e7df95e","Type":"ContainerStarted","Data":"fb48b51c0505bf1cc9bd7da10369457338074d713c46269ddba5d81f6ca3bbd7"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.733910 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerStarted","Data":"b9fb6e699f3ea391a43c6ebb50fd2a1b84b5414735c34127821eead6962bec2b"} Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.748800 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-78d5db5598-m7dnz" 
podStartSLOduration=3.748776855 podStartE2EDuration="3.748776855s" podCreationTimestamp="2026-03-16 15:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:38.697133792 +0000 UTC m=+1220.424524089" watchObservedRunningTime="2026-03-16 15:33:38.748776855 +0000 UTC m=+1220.476167142" Mar 16 15:33:38 crc kubenswrapper[4736]: I0316 15:33:38.784169 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.784144642 podStartE2EDuration="5.784144642s" podCreationTimestamp="2026-03-16 15:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:38.775730857 +0000 UTC m=+1220.503121144" watchObservedRunningTime="2026-03-16 15:33:38.784144642 +0000 UTC m=+1220.511534929" Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.085347 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.245706 4736 scope.go:117] "RemoveContainer" containerID="bef68a4da7a6b16c38e5d85bfec01d2efb557997c8be7ad0c002012a174d8c4f" Mar 16 15:33:39 crc kubenswrapper[4736]: E0316 15:33:39.732285 4736 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cd5b402795716dca74d3a2209dec9635aaec38dd9de2397e0fd4a1e9f7514b60/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cd5b402795716dca74d3a2209dec9635aaec38dd9de2397e0fd4a1e9f7514b60/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_4dbfbe46-5ac1-4ba4-bea6-e1712d671e78/ceilometer-central-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_4dbfbe46-5ac1-4ba4-bea6-e1712d671e78/ceilometer-central-agent/0.log: no such file or directory Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.849792 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e" exitCode=0 Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.851054 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e"} Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.851088 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e"} Mar 16 15:33:39 crc kubenswrapper[4736]: I0316 15:33:39.851127 4736 scope.go:117] "RemoveContainer" containerID="d913506ed85bd453632f44b051d57a127bbfa0326b3ffb2cb8d536a6f22597ce" Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.021658 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.185296 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.191670 4736 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6779bd586b-h28pq" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-api" containerID="cri-o://7ffd6c7641d7f382793b78d637a249602cf34732889e9f01964f23df3673331a" gracePeriod=30 Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.192777 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6779bd586b-h28pq" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-httpd" containerID="cri-o://385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef" gracePeriod=30 Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.868348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b87666975-blf44" event={"ID":"5f712fb8-4a0f-400d-b21d-1fc72a671b31","Type":"ContainerStarted","Data":"c5b423c00c460f9a66eeab99191397a8cfd3e65e0e00d9710ca0facad764d09c"} Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.870237 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.900877 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerStarted","Data":"5575127eab2bf22fa7b9d9a952e86aa33867feb7857f0fea413c7cfaa3c4b977"} Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.901619 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b87666975-blf44" podStartSLOduration=4.901597927 podStartE2EDuration="4.901597927s" podCreationTimestamp="2026-03-16 15:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:40.892647418 +0000 UTC m=+1222.620037705" watchObservedRunningTime="2026-03-16 15:33:40.901597927 +0000 UTC m=+1222.628988214" Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.904696 4736 generic.go:334] "Generic (PLEG): container finished" podID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerID="385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef" exitCode=0 Mar 16 15:33:40 crc kubenswrapper[4736]: I0316 15:33:40.905445 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerDied","Data":"385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef"} Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.662512 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kslp8"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.664715 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.688996 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kslp8"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.780539 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7c80-account-create-update-2q7s7"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.782697 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.795704 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.804380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.804516 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kmd\" (UniqueName: \"kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.807350 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dtt9x"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.808689 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.823960 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtt9x"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.853183 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7c80-account-create-update-2q7s7"] Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.930459 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kmd\" (UniqueName: \"kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.931356 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmtvc\" (UniqueName: \"kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.931547 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.931890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.935404 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.938583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcm7\" (UniqueName: \"kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:42 crc kubenswrapper[4736]: I0316 15:33:42.933390 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:42.999937 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kmd\" (UniqueName: \"kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd\") pod \"nova-api-db-create-kslp8\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.026069 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-snm2s"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.027395 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.040718 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcm7\" (UniqueName: \"kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.040856 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmtvc\" (UniqueName: \"kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.040895 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.040977 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.042606 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.043524 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.074504 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-snm2s"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.102021 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8b9d-account-create-update-jnndn"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.111538 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.122364 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmtvc\" (UniqueName: \"kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc\") pod \"nova-api-7c80-account-create-update-2q7s7\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.123665 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.129669 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcm7\" (UniqueName: \"kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7\") pod \"nova-cell0-db-create-dtt9x\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.142481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qxh\" (UniqueName: \"kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.142623 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.164154 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.170507 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8b9d-account-create-update-jnndn"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.245139 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59qxh\" (UniqueName: \"kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.245234 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lntv2\" (UniqueName: \"kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.245270 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.245306 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.246348 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.286847 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-05c6-account-create-update-kmnjh"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.288335 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.301690 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.302226 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.303608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59qxh\" (UniqueName: \"kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh\") pod \"nova-cell1-db-create-snm2s\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.305961 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-05c6-account-create-update-kmnjh"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.347388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hm67\" (UniqueName: \"kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.347449 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lntv2\" (UniqueName: \"kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.347488 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.347512 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.348243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.374032 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lntv2\" (UniqueName: \"kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2\") pod \"nova-cell0-8b9d-account-create-update-jnndn\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.387766 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.402561 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.449059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.449268 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hm67\" (UniqueName: \"kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.450495 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.477506 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hm67\" (UniqueName: \"kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67\") pod \"nova-cell1-05c6-account-create-update-kmnjh\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.526581 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:43 crc kubenswrapper[4736]: W0316 15:33:43.586438 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1726cfcf_6fcf_4acd_9a3e_dbcc26a797aa.slice/crio-19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e.scope WatchSource:0}: Error finding container 19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e: Status 404 returned error can't find the container with id 19f8d992174e5ab13728ac3a79447c10ae503761fa34c42408fb7aa69bd7801e Mar 16 15:33:43 crc kubenswrapper[4736]: W0316 15:33:43.598162 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod590b1c05_0ddc_4332_986b_525b6720d465.slice/crio-ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815.scope WatchSource:0}: Error finding container ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815: Status 404 returned error can't find the container with id ac08c9a3e6bd6ce3cee5ab1b052c0b950d836b66787cf6c518c56ba050606815 Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.675474 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.842155 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-678d85b7f7-bdd5r"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.845754 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.900210 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.912985 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.967681 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tpp\" (UniqueName: \"kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968020 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgj9\" (UniqueName: \"kubernetes.io/projected/5498e21f-9b52-4ecb-9c10-7a688723d57f-kube-api-access-mhgj9\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968156 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-combined-ca-bundle\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968307 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data-custom\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968472 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968622 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data\") pod \"heat-engine-678d85b7f7-bdd5r\" 
(UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:43 crc kubenswrapper[4736]: I0316 15:33:43.968755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.004697 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-678d85b7f7-bdd5r"] Mar 16 15:33:44 crc kubenswrapper[4736]: E0316 15:33:44.035749 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ec5952_c9e8_4cbd_9e04_5a48f9d84d1d.slice/crio-conmon-385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c93e24_5358_402f_9ace_e85478dedb49.slice/crio-42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ec5952_c9e8_4cbd_9e04_5a48f9d84d1d.slice/crio-385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c93e24_5358_402f_9ace_e85478dedb49.slice/crio-conmon-42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-conmon-6bcef6813f91f96852090679d4f461598ade51092c39cf39cf412fd368f587fe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-conmon-cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-conmon-6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883da065_2249_44ef_8c33_e8bdd36f824b.slice/crio-conmon-276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-cb5dbadc0d5bcf353f27f6929cc9eddb1f4d65f675da5cd56c83f71f3bca5f5d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4dbfbe46_5ac1_4ba4_bea6_e1712d671e78.slice/crio-6fa5171385b7817efc6526b04ba6253ff3a15c221de9040d250251d9dab33590.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8512fe_c3bb_4573_b46d_1aada355ac6e.slice/crio-00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8512fe_c3bb_4573_b46d_1aada355ac6e.slice/crio-conmon-00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41.scope\": RecentStats: unable to find data in memory cache]" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.054463 4736 generic.go:334] "Generic (PLEG): container finished" podID="883da065-2249-44ef-8c33-e8bdd36f824b" containerID="276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d" exitCode=137 Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.054579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerDied","Data":"276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d"} Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.062834 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.064285 4736 generic.go:334] "Generic (PLEG): container finished" podID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerID="00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41" exitCode=137 Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.064329 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerDied","Data":"00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41"} Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.087952 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088026 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data-custom\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088247 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " 
pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088301 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tpp\" (UniqueName: \"kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088498 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgj9\" (UniqueName: \"kubernetes.io/projected/5498e21f-9b52-4ecb-9c10-7a688723d57f-kube-api-access-mhgj9\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.088534 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-combined-ca-bundle\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.104601 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.105816 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.106874 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.108328 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.111950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data-custom\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.115225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.115580 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-config-data\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.126014 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgj9\" (UniqueName: \"kubernetes.io/projected/5498e21f-9b52-4ecb-9c10-7a688723d57f-kube-api-access-mhgj9\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.126808 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5498e21f-9b52-4ecb-9c10-7a688723d57f-combined-ca-bundle\") pod \"heat-engine-678d85b7f7-bdd5r\" (UID: \"5498e21f-9b52-4ecb-9c10-7a688723d57f\") " pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.159298 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.160659 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tpp\" (UniqueName: \"kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp\") pod \"heat-api-6c8db4894c-j6mwd\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.193504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzbn\" (UniqueName: \"kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.193764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.193815 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.193841 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.212316 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.253748 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.295636 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzbn\" (UniqueName: \"kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.295792 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.295822 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.295841 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.307608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.312713 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.313961 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.333918 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzbn\" (UniqueName: \"kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn\") pod \"heat-cfnapi-57db97dd4d-cgdvc\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.562649 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 16 15:33:44 crc kubenswrapper[4736]: I0316 15:33:44.566681 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.601713 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.647687 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data\") pod \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.647811 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom\") pod \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.647907 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle\") pod \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.647936 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswps\" (UniqueName: \"kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps\") pod \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.648116 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs\") pod \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\" (UID: \"2c8512fe-c3bb-4573-b46d-1aada355ac6e\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.652392 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs" (OuterVolumeSpecName: "logs") pod "2c8512fe-c3bb-4573-b46d-1aada355ac6e" (UID: "2c8512fe-c3bb-4573-b46d-1aada355ac6e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.657259 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c8512fe-c3bb-4573-b46d-1aada355ac6e" (UID: "2c8512fe-c3bb-4573-b46d-1aada355ac6e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.660409 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps" (OuterVolumeSpecName: "kube-api-access-cswps") pod "2c8512fe-c3bb-4573-b46d-1aada355ac6e" (UID: "2c8512fe-c3bb-4573-b46d-1aada355ac6e"). InnerVolumeSpecName "kube-api-access-cswps". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.750956 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.751368 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswps\" (UniqueName: \"kubernetes.io/projected/2c8512fe-c3bb-4573-b46d-1aada355ac6e-kube-api-access-cswps\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.751381 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c8512fe-c3bb-4573-b46d-1aada355ac6e-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.871287 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.896555 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c8512fe-c3bb-4573-b46d-1aada355ac6e" (UID: "2c8512fe-c3bb-4573-b46d-1aada355ac6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.937678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data" (OuterVolumeSpecName: "config-data") pod "2c8512fe-c3bb-4573-b46d-1aada355ac6e" (UID: "2c8512fe-c3bb-4573-b46d-1aada355ac6e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.971932 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle\") pod \"883da065-2249-44ef-8c33-e8bdd36f824b\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.972081 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs\") pod \"883da065-2249-44ef-8c33-e8bdd36f824b\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.972234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data\") pod \"883da065-2249-44ef-8c33-e8bdd36f824b\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.972318 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8577t\" (UniqueName: \"kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t\") pod \"883da065-2249-44ef-8c33-e8bdd36f824b\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.972424 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom\") pod \"883da065-2249-44ef-8c33-e8bdd36f824b\" (UID: \"883da065-2249-44ef-8c33-e8bdd36f824b\") " Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.983334 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.983606 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8512fe-c3bb-4573-b46d-1aada355ac6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:45 crc kubenswrapper[4736]: I0316 15:33:45.984910 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs" (OuterVolumeSpecName: "logs") pod "883da065-2249-44ef-8c33-e8bdd36f824b" (UID: "883da065-2249-44ef-8c33-e8bdd36f824b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.002677 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t" (OuterVolumeSpecName: "kube-api-access-8577t") pod "883da065-2249-44ef-8c33-e8bdd36f824b" (UID: "883da065-2249-44ef-8c33-e8bdd36f824b"). InnerVolumeSpecName "kube-api-access-8577t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.018347 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "883da065-2249-44ef-8c33-e8bdd36f824b" (UID: "883da065-2249-44ef-8c33-e8bdd36f824b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.089186 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/883da065-2249-44ef-8c33-e8bdd36f824b-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.089226 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8577t\" (UniqueName: \"kubernetes.io/projected/883da065-2249-44ef-8c33-e8bdd36f824b-kube-api-access-8577t\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.089238 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.112361 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883da065-2249-44ef-8c33-e8bdd36f824b" (UID: "883da065-2249-44ef-8c33-e8bdd36f824b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.196713 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.285657 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data" (OuterVolumeSpecName: "config-data") pod "883da065-2249-44ef-8c33-e8bdd36f824b" (UID: "883da065-2249-44ef-8c33-e8bdd36f824b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.304189 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883da065-2249-44ef-8c33-e8bdd36f824b-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.394481 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" event={"ID":"2c8512fe-c3bb-4573-b46d-1aada355ac6e","Type":"ContainerDied","Data":"513effc752df252eccc2223e1a1a4e67a6cb81e053c1e36cbff2c5a995126ba3"} Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.394556 4736 scope.go:117] "RemoveContainer" containerID="00b3937b599439e5190241abf1a4edb069a1ea6b1af9b0691999882836b0cb41" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.394716 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6cf4cbd846-gkgvc" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.502129 4736 scope.go:117] "RemoveContainer" containerID="f24d671fb965d82c3b93b00d1f27dcfb39c7e973d0ce70d47b20081fc8639b74" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.502409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd955ff9f-lqqft" event={"ID":"883da065-2249-44ef-8c33-e8bdd36f824b","Type":"ContainerDied","Data":"a3f383d73636f9e7c7334b8b9808ef2c7c208c0b44f6e35a094cd5d2eadf4b98"} Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.502489 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd955ff9f-lqqft" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.549380 4736 generic.go:334] "Generic (PLEG): container finished" podID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerID="7ffd6c7641d7f382793b78d637a249602cf34732889e9f01964f23df3673331a" exitCode=0 Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.549432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerDied","Data":"7ffd6c7641d7f382793b78d637a249602cf34732889e9f01964f23df3673331a"} Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.558627 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.568141 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.603972 4736 scope.go:117] "RemoveContainer" containerID="276691327ae0a7b95558d58c3837d0a0da26128449d24edd3c17a7a46f2e2c0d" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.624210 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6cf4cbd846-gkgvc"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.695628 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.734818 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.741826 4736 scope.go:117] "RemoveContainer" containerID="4c8de0fefbba647f4dc0ab4c0e5384a3eb159a12cb000e51d9695b8475151fc2" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.818060 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs\") pod \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.818142 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config\") pod \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.818219 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle\") pod \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.818381 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config\") pod \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.818416 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95pbh\" (UniqueName: \"kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh\") pod \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\" (UID: \"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d\") " Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.854829 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5fd955ff9f-lqqft"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.862387 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" (UID: "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.876201 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh" (OuterVolumeSpecName: "kube-api-access-95pbh") pod "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" (UID: "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d"). InnerVolumeSpecName "kube-api-access-95pbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.928681 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.932251 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98bb585-hhxkb" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="dnsmasq-dns" containerID="cri-o://1c90f848d4197065da6c945e6f4a35ab2ed0a74b1d1739ef43359b6a86d447b1" gracePeriod=10 Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.946093 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dtt9x"] Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.952250 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:46 crc kubenswrapper[4736]: I0316 15:33:46.952286 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95pbh\" (UniqueName: \"kubernetes.io/projected/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-kube-api-access-95pbh\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.030542 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" path="/var/lib/kubelet/pods/2c8512fe-c3bb-4573-b46d-1aada355ac6e/volumes" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.030799 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-98bb585-hhxkb" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: connect: connection refused" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.031264 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" path="/var/lib/kubelet/pods/883da065-2249-44ef-8c33-e8bdd36f824b/volumes" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.055754 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config" (OuterVolumeSpecName: "config") pod "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" (UID: "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.065275 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" (UID: "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.161849 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.162182 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.163940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" (UID: "55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.271861 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.498812 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-05c6-account-create-update-kmnjh"] Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.512321 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kslp8"] Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.571369 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:33:47 crc kubenswrapper[4736]: W0316 15:33:47.602649 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9093ab59_b372_4af4_8c67_a4ab97e79d33.slice/crio-bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69 WatchSource:0}: Error finding container bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69: Status 404 returned error can't find the container with id bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69 Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.605159 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-678d85b7f7-bdd5r"] Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.645419 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-799b5776df-77jmf" event={"ID":"de867ad2-920b-4719-972b-71b4cbb83e2e","Type":"ContainerStarted","Data":"9ee3106e1ec1b95880a59b2f5ba0b62d0a9e52092274015d2a60a3a720a425f5"} Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.645669 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:33:47 crc kubenswrapper[4736]: W0316 15:33:47.654303 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9763d87e_5b6c_45e2_9b19_a9799e96f9fd.slice/crio-e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8 WatchSource:0}: Error finding container e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8: Status 404 returned error can't find the container with id e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8 Mar 16 15:33:47 crc kubenswrapper[4736]: 
I0316 15:33:47.725057 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56b5646679-zf58k" event={"ID":"55d9a0a2-2842-4c23-98b1-2f97469a951f","Type":"ContainerStarted","Data":"9d08fae30dd0e2b3845a83d9d5ecc08b039e1df826b5a22a1bd2213c610104be"} Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.726049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.743449 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-799b5776df-77jmf" podStartSLOduration=4.221079757 podStartE2EDuration="11.743420797s" podCreationTimestamp="2026-03-16 15:33:36 +0000 UTC" firstStartedPulling="2026-03-16 15:33:38.039968027 +0000 UTC m=+1219.767358314" lastFinishedPulling="2026-03-16 15:33:45.562309067 +0000 UTC m=+1227.289699354" observedRunningTime="2026-03-16 15:33:47.676423213 +0000 UTC m=+1229.403813500" watchObservedRunningTime="2026-03-16 15:33:47.743420797 +0000 UTC m=+1229.470811084" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.754335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtt9x" event={"ID":"3f7edb4e-af8b-4049-b0d9-cd107b5cb783","Type":"ContainerStarted","Data":"82950c21ae0eb5e9a47e77a7fc53aa6a8bff059c10dabb29bd372e5c84f3dce0"} Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.754385 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtt9x" event={"ID":"3f7edb4e-af8b-4049-b0d9-cd107b5cb783","Type":"ContainerStarted","Data":"ca794e1c49f7ffe888c513ac8f92c8357a33700e5ea4e9739f5f313686e47a73"} Mar 16 15:33:47 crc kubenswrapper[4736]: W0316 15:33:47.779446 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5498e21f_9b52_4ecb_9c10_7a688723d57f.slice/crio-7e48b7604979a7d6407394412deb9a6d2df426f173db6c9c99db9921b0a861be WatchSource:0}: Error finding container 7e48b7604979a7d6407394412deb9a6d2df426f173db6c9c99db9921b0a861be: Status 404 returned error can't find the container with id 7e48b7604979a7d6407394412deb9a6d2df426f173db6c9c99db9921b0a861be Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.808021 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-56b5646679-zf58k" podStartSLOduration=4.671459147 podStartE2EDuration="11.807991216s" podCreationTimestamp="2026-03-16 15:33:36 +0000 UTC" firstStartedPulling="2026-03-16 15:33:37.776914334 +0000 UTC m=+1219.504304611" lastFinishedPulling="2026-03-16 15:33:44.913446393 +0000 UTC m=+1226.640836680" observedRunningTime="2026-03-16 15:33:47.770865601 +0000 UTC m=+1229.498255888" watchObservedRunningTime="2026-03-16 15:33:47.807991216 +0000 UTC m=+1229.535381503" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.817687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerStarted","Data":"683b5e4909c2aceb15bd9e1c93eb1708165a4b42f671d9898b2a182e458069e2"} Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.829612 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-dtt9x" podStartSLOduration=5.829590113 podStartE2EDuration="5.829590113s" podCreationTimestamp="2026-03-16 15:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:47.81115858 +0000 UTC m=+1229.538548877" watchObservedRunningTime="2026-03-16 15:33:47.829590113 +0000 UTC m=+1229.556980400" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.899087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6779bd586b-h28pq" event={"ID":"55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d","Type":"ContainerDied","Data":"1eed9c06ed247475117533fcf394c148fc156f55c12b3db21dafb8ebdd41626d"} Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.899192 4736 scope.go:117] "RemoveContainer" containerID="385bb8499b5bf9134d3519b48f9c5b43761cb64555172b1c94cdd0a3890097ef" Mar 16 15:33:47 crc kubenswrapper[4736]: I0316 15:33:47.899406 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6779bd586b-h28pq" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.005494 4736 generic.go:334] "Generic (PLEG): container finished" podID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerID="1c90f848d4197065da6c945e6f4a35ab2ed0a74b1d1739ef43359b6a86d447b1" exitCode=0 Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.005712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98bb585-hhxkb" event={"ID":"b2436adc-ebdd-4481-92f8-b1c1d9010ab7","Type":"ContainerDied","Data":"1c90f848d4197065da6c945e6f4a35ab2ed0a74b1d1739ef43359b6a86d447b1"} Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.015073 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.042172 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.061832 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6779bd586b-h28pq"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.135843 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8b9d-account-create-update-jnndn"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.137934 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.150305 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-snm2s"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.163703 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7c80-account-create-update-2q7s7"] Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.221344 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb\") pod \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.221403 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb\") pod \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.221511 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc\") pod \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.221585 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fjn9\" (UniqueName: \"kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9\") pod \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.221604 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config\") pod \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\" (UID: \"b2436adc-ebdd-4481-92f8-b1c1d9010ab7\") " Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.291971 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9" (OuterVolumeSpecName: "kube-api-access-4fjn9") pod "b2436adc-ebdd-4481-92f8-b1c1d9010ab7" (UID: "b2436adc-ebdd-4481-92f8-b1c1d9010ab7"). InnerVolumeSpecName "kube-api-access-4fjn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.325506 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fjn9\" (UniqueName: \"kubernetes.io/projected/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-kube-api-access-4fjn9\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.382391 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config" (OuterVolumeSpecName: "config") pod "b2436adc-ebdd-4481-92f8-b1c1d9010ab7" (UID: "b2436adc-ebdd-4481-92f8-b1c1d9010ab7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.406690 4736 scope.go:117] "RemoveContainer" containerID="7ffd6c7641d7f382793b78d637a249602cf34732889e9f01964f23df3673331a" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.427572 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:48 crc kubenswrapper[4736]: W0316 15:33:48.495657 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d25ea99_429a_495b_b0db_f30af232a75f.slice/crio-5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b WatchSource:0}: Error finding container 5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b: Status 404 returned error can't find the container with id 5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.593047 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2436adc-ebdd-4481-92f8-b1c1d9010ab7" (UID: "b2436adc-ebdd-4481-92f8-b1c1d9010ab7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.638155 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.758684 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2436adc-ebdd-4481-92f8-b1c1d9010ab7" (UID: "b2436adc-ebdd-4481-92f8-b1c1d9010ab7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.770591 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2436adc-ebdd-4481-92f8-b1c1d9010ab7" (UID: "b2436adc-ebdd-4481-92f8-b1c1d9010ab7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.847744 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:48 crc kubenswrapper[4736]: I0316 15:33:48.847777 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2436adc-ebdd-4481-92f8-b1c1d9010ab7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.010353 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" path="/var/lib/kubelet/pods/55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d/volumes" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.049962 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" event={"ID":"9093ab59-b372-4af4-8c67-a4ab97e79d33","Type":"ContainerStarted","Data":"2dc1cace83828633251f3ff46d3f8a607fd74dc90e2b56935c551ecb0b17c2e2"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.051671 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" event={"ID":"9093ab59-b372-4af4-8c67-a4ab97e79d33","Type":"ContainerStarted","Data":"bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.058929 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" event={"ID":"25372155-a159-47f2-aba8-8eacd41bf2ff","Type":"ContainerStarted","Data":"e54fda7607b86e67408c841e3fc021738ab4d71c9216f2ec5a82c6b361615442"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.065936 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98bb585-hhxkb" event={"ID":"b2436adc-ebdd-4481-92f8-b1c1d9010ab7","Type":"ContainerDied","Data":"49d0be8f62d591a2d97146487c354fdbf0faa4d57128fffdb163d99791483be4"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.066156 4736 scope.go:117] "RemoveContainer" containerID="1c90f848d4197065da6c945e6f4a35ab2ed0a74b1d1739ef43359b6a86d447b1" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.067632 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98bb585-hhxkb" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.079140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-678d85b7f7-bdd5r" event={"ID":"5498e21f-9b52-4ecb-9c10-7a688723d57f","Type":"ContainerStarted","Data":"7e48b7604979a7d6407394412deb9a6d2df426f173db6c9c99db9921b0a861be"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.099005 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" event={"ID":"74565328-52cd-455e-b53c-199f069833c9","Type":"ContainerStarted","Data":"1542733dff5d1b008c9a91ee337d4ff890b2c7569f99dded3b23089f8db57f6d"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.106740 4736 generic.go:334] "Generic (PLEG): container finished" podID="3f7edb4e-af8b-4049-b0d9-cd107b5cb783" containerID="82950c21ae0eb5e9a47e77a7fc53aa6a8bff059c10dabb29bd372e5c84f3dce0" exitCode=0 Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.106874 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtt9x" event={"ID":"3f7edb4e-af8b-4049-b0d9-cd107b5cb783","Type":"ContainerDied","Data":"82950c21ae0eb5e9a47e77a7fc53aa6a8bff059c10dabb29bd372e5c84f3dce0"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.145775 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8db4894c-j6mwd" event={"ID":"68f8b303-ec21-4d2d-a420-f06569625ae4","Type":"ContainerStarted","Data":"f0cbb0a9d1f4457335870ec440ec29d0cb5a96ad186a02698efd97de035f212c"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.145828 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8db4894c-j6mwd" event={"ID":"68f8b303-ec21-4d2d-a420-f06569625ae4","Type":"ContainerStarted","Data":"c34180b85432981ac651492569d569fb7f02b3336c40f9f4afe0c878b8d766e1"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.147009 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.159255 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kslp8" event={"ID":"9763d87e-5b6c-45e2-9b19-a9799e96f9fd","Type":"ContainerStarted","Data":"39b0a7d78ec606ac459c47e296aafc8a73c10a01bd29d1100fbd7b6d079b2d56"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.159310 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kslp8" event={"ID":"9763d87e-5b6c-45e2-9b19-a9799e96f9fd","Type":"ContainerStarted","Data":"e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.172337 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snm2s" event={"ID":"6fe90198-808b-48df-bdef-8341723e0511","Type":"ContainerStarted","Data":"e975bddd9d54bb18d1f2d045444eff33537f9c0fa9b1c1cff8ebd37bf160805b"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.188823 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7c80-account-create-update-2q7s7" event={"ID":"1d25ea99-429a-495b-b0db-f30af232a75f","Type":"ContainerStarted","Data":"5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b"} Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.279101 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c8db4894c-j6mwd" podStartSLOduration=6.279076654 
podStartE2EDuration="6.279076654s" podCreationTimestamp="2026-03-16 15:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:49.2733264 +0000 UTC m=+1231.000716697" watchObservedRunningTime="2026-03-16 15:33:49.279076654 +0000 UTC m=+1231.006466941" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.289714 4736 scope.go:117] "RemoveContainer" containerID="c5f956b340f9ec106af69986aaf62f09070ac88eeaa93d7bd8a3af0ef40e69fc" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.303496 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-kslp8" podStartSLOduration=7.303474567 podStartE2EDuration="7.303474567s" podCreationTimestamp="2026-03-16 15:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:49.295704209 +0000 UTC m=+1231.023094496" watchObservedRunningTime="2026-03-16 15:33:49.303474567 +0000 UTC m=+1231.030864854" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.373736 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" podStartSLOduration=6.373534393 podStartE2EDuration="6.373534393s" podCreationTimestamp="2026-03-16 15:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:49.366649478 +0000 UTC m=+1231.094039765" watchObservedRunningTime="2026-03-16 15:33:49.373534393 +0000 UTC m=+1231.100924690" Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.415745 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:49 crc kubenswrapper[4736]: I0316 15:33:49.455685 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98bb585-hhxkb"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.208755 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.236582 4736 generic.go:334] "Generic (PLEG): container finished" podID="6fe90198-808b-48df-bdef-8341723e0511" containerID="7854849ab46c0bef0984d2c5417cc6c87124fbffa0f8908e2aaa37f5c675faaa" exitCode=0 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.236684 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snm2s" event={"ID":"6fe90198-808b-48df-bdef-8341723e0511","Type":"ContainerDied","Data":"7854849ab46c0bef0984d2c5417cc6c87124fbffa0f8908e2aaa37f5c675faaa"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.243734 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.246975 4736 generic.go:334] "Generic (PLEG): container finished" podID="1d25ea99-429a-495b-b0db-f30af232a75f" containerID="e3fb7bbc41336ef2c7255445d2c8b421415e29b8a23f058fa4ef68f054054db8" exitCode=0 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.247049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7c80-account-create-update-2q7s7" event={"ID":"1d25ea99-429a-495b-b0db-f30af232a75f","Type":"ContainerDied","Data":"e3fb7bbc41336ef2c7255445d2c8b421415e29b8a23f058fa4ef68f054054db8"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.264332 4736 
generic.go:334] "Generic (PLEG): container finished" podID="9763d87e-5b6c-45e2-9b19-a9799e96f9fd" containerID="39b0a7d78ec606ac459c47e296aafc8a73c10a01bd29d1100fbd7b6d079b2d56" exitCode=0 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.264397 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kslp8" event={"ID":"9763d87e-5b6c-45e2-9b19-a9799e96f9fd","Type":"ContainerDied","Data":"39b0a7d78ec606ac459c47e296aafc8a73c10a01bd29d1100fbd7b6d079b2d56"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.266735 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-75b949cc99-d78kz"] Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.270398 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="dnsmasq-dns" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.270571 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="dnsmasq-dns" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.270737 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker-log" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.270802 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker-log" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.270882 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-api" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.270938 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-api" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.270990 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.271048 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.271145 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-httpd" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.271213 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-httpd" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.271297 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener-log" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.271362 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener-log" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.271800 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.271875 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener" Mar 16 15:33:50 crc kubenswrapper[4736]: E0316 15:33:50.271936 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="init" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272004 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="init" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272301 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker-log" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272413 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener-log" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272499 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-httpd" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272557 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" containerName="dnsmasq-dns" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272617 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ec5952-c9e8-4cbd-9e04-5a48f9d84d1d" containerName="neutron-api" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.272853 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8512fe-c3bb-4573-b46d-1aada355ac6e" containerName="barbican-keystone-listener" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.273011 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="883da065-2249-44ef-8c33-e8bdd36f824b" containerName="barbican-worker" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.282622 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-678d85b7f7-bdd5r" event={"ID":"5498e21f-9b52-4ecb-9c10-7a688723d57f","Type":"ContainerStarted","Data":"f782303a076426989487a08fff35ab101a3eddd306a92c0a52cc9df71238ba90"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.282977 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.283148 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.287754 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.287979 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.300642 4736 generic.go:334] "Generic (PLEG): container finished" podID="74565328-52cd-455e-b53c-199f069833c9" containerID="6d35b6e04d42a028ff585588c48e3c20927eee31924fdcb36fff2b8fb0e5034e" exitCode=1 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.300790 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" event={"ID":"74565328-52cd-455e-b53c-199f069833c9","Type":"ContainerDied","Data":"6d35b6e04d42a028ff585588c48e3c20927eee31924fdcb36fff2b8fb0e5034e"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.301776 4736 scope.go:117] "RemoveContainer" containerID="6d35b6e04d42a028ff585588c48e3c20927eee31924fdcb36fff2b8fb0e5034e" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.343976 4736 generic.go:334] "Generic (PLEG): container finished" podID="9093ab59-b372-4af4-8c67-a4ab97e79d33" containerID="2dc1cace83828633251f3ff46d3f8a607fd74dc90e2b56935c551ecb0b17c2e2" exitCode=0 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.344095 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" event={"ID":"9093ab59-b372-4af4-8c67-a4ab97e79d33","Type":"ContainerDied","Data":"2dc1cace83828633251f3ff46d3f8a607fd74dc90e2b56935c551ecb0b17c2e2"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.370420 4736 generic.go:334] "Generic (PLEG): container finished" podID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerID="f0cbb0a9d1f4457335870ec440ec29d0cb5a96ad186a02698efd97de035f212c" exitCode=1 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.370707 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75b949cc99-d78kz"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.370785 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8db4894c-j6mwd" event={"ID":"68f8b303-ec21-4d2d-a420-f06569625ae4","Type":"ContainerDied","Data":"f0cbb0a9d1f4457335870ec440ec29d0cb5a96ad186a02698efd97de035f212c"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.371429 4736 scope.go:117] "RemoveContainer" containerID="f0cbb0a9d1f4457335870ec440ec29d0cb5a96ad186a02698efd97de035f212c" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.400624 4736 generic.go:334] "Generic (PLEG): container finished" podID="25372155-a159-47f2-aba8-8eacd41bf2ff" containerID="6b9189de1af32b1e9bfc4c2d2b94b0385f0cbd4e383ac25a73770e7853a1f2ec" exitCode=0 Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.400906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" event={"ID":"25372155-a159-47f2-aba8-8eacd41bf2ff","Type":"ContainerDied","Data":"6b9189de1af32b1e9bfc4c2d2b94b0385f0cbd4e383ac25a73770e7853a1f2ec"} Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.408910 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-public-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: 
\"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.408987 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-internal-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.410707 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.410810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-combined-ca-bundle\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.410938 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbhl\" (UniqueName: \"kubernetes.io/projected/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-kube-api-access-ktbhl\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.411114 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data-custom\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.439531 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b4b9fc8dc-4c2rv"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.440930 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.444128 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.452862 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.463812 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-678d85b7f7-bdd5r" podStartSLOduration=7.463787924 podStartE2EDuration="7.463787924s" podCreationTimestamp="2026-03-16 15:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:50.338295874 +0000 UTC m=+1232.065686161" watchObservedRunningTime="2026-03-16 15:33:50.463787924 +0000 UTC m=+1232.191178211" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.522542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.522635 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-combined-ca-bundle\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.522747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbhl\" (UniqueName: \"kubernetes.io/projected/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-kube-api-access-ktbhl\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.525947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data-custom\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.526027 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-public-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.526059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-internal-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.539225 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b4b9fc8dc-4c2rv"] Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.571121 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data-custom\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.577597 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-combined-ca-bundle\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.590248 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbhl\" (UniqueName: \"kubernetes.io/projected/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-kube-api-access-ktbhl\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.591459 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-public-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.610542 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-internal-tls-certs\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.611439 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7-config-data\") pod \"heat-api-75b949cc99-d78kz\" (UID: \"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7\") " pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.622730 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.644711 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data-custom\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.644925 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vml\" (UniqueName: \"kubernetes.io/projected/b85b52b6-77da-47f3-96b7-5230c7804524-kube-api-access-p7vml\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.645067 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-combined-ca-bundle\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.645302 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-public-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.645487 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-internal-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.645535 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748173 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-combined-ca-bundle\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748299 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-public-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748383 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-internal-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748406 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748494 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data-custom\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.748625 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7vml\" (UniqueName: \"kubernetes.io/projected/b85b52b6-77da-47f3-96b7-5230c7804524-kube-api-access-p7vml\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.764873 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-public-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.791950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-combined-ca-bundle\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.803256 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7vml\" (UniqueName: \"kubernetes.io/projected/b85b52b6-77da-47f3-96b7-5230c7804524-kube-api-access-p7vml\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.861670 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.869192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-internal-tls-certs\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:50 crc kubenswrapper[4736]: I0316 15:33:50.906327 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b85b52b6-77da-47f3-96b7-5230c7804524-config-data-custom\") pod \"heat-cfnapi-7b4b9fc8dc-4c2rv\" (UID: \"b85b52b6-77da-47f3-96b7-5230c7804524\") " pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.053717 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2436adc-ebdd-4481-92f8-b1c1d9010ab7" path="/var/lib/kubelet/pods/b2436adc-ebdd-4481-92f8-b1c1d9010ab7/volumes" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.076423 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.139267 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.281637 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxcm7\" (UniqueName: \"kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7\") pod \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.281748 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts\") pod \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\" (UID: \"3f7edb4e-af8b-4049-b0d9-cd107b5cb783\") " Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.283273 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f7edb4e-af8b-4049-b0d9-cd107b5cb783" (UID: "3f7edb4e-af8b-4049-b0d9-cd107b5cb783"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.286919 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7" (OuterVolumeSpecName: "kube-api-access-vxcm7") pod "3f7edb4e-af8b-4049-b0d9-cd107b5cb783" (UID: "3f7edb4e-af8b-4049-b0d9-cd107b5cb783"). InnerVolumeSpecName "kube-api-access-vxcm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.384627 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxcm7\" (UniqueName: \"kubernetes.io/projected/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-kube-api-access-vxcm7\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.384689 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f7edb4e-af8b-4049-b0d9-cd107b5cb783-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.424368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dtt9x" event={"ID":"3f7edb4e-af8b-4049-b0d9-cd107b5cb783","Type":"ContainerDied","Data":"ca794e1c49f7ffe888c513ac8f92c8357a33700e5ea4e9739f5f313686e47a73"} Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.424436 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca794e1c49f7ffe888c513ac8f92c8357a33700e5ea4e9739f5f313686e47a73" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.424548 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dtt9x" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.425819 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" containerID="cri-o://9ee3106e1ec1b95880a59b2f5ba0b62d0a9e52092274015d2a60a3a720a425f5" gracePeriod=60 Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.427768 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" containerID="cri-o://9d08fae30dd0e2b3845a83d9d5ecc08b039e1df826b5a22a1bd2213c610104be" gracePeriod=60 Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.450265 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": EOF" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.458723 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": EOF" Mar 16 15:33:51 crc kubenswrapper[4736]: I0316 15:33:51.548729 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": EOF" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.125633 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.252606 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts\") pod \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.252723 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6kmd\" (UniqueName: \"kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd\") pod \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\" (UID: \"9763d87e-5b6c-45e2-9b19-a9799e96f9fd\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.259046 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9763d87e-5b6c-45e2-9b19-a9799e96f9fd" (UID: "9763d87e-5b6c-45e2-9b19-a9799e96f9fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.277804 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd" (OuterVolumeSpecName: "kube-api-access-t6kmd") pod "9763d87e-5b6c-45e2-9b19-a9799e96f9fd" (UID: "9763d87e-5b6c-45e2-9b19-a9799e96f9fd"). InnerVolumeSpecName "kube-api-access-t6kmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.286842 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.359825 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hm67\" (UniqueName: \"kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67\") pod \"9093ab59-b372-4af4-8c67-a4ab97e79d33\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.359984 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts\") pod \"9093ab59-b372-4af4-8c67-a4ab97e79d33\" (UID: \"9093ab59-b372-4af4-8c67-a4ab97e79d33\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.360728 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.360756 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6kmd\" (UniqueName: \"kubernetes.io/projected/9763d87e-5b6c-45e2-9b19-a9799e96f9fd-kube-api-access-t6kmd\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.361298 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9093ab59-b372-4af4-8c67-a4ab97e79d33" (UID: "9093ab59-b372-4af4-8c67-a4ab97e79d33"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.384148 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67" (OuterVolumeSpecName: "kube-api-access-6hm67") pod "9093ab59-b372-4af4-8c67-a4ab97e79d33" (UID: "9093ab59-b372-4af4-8c67-a4ab97e79d33"). InnerVolumeSpecName "kube-api-access-6hm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.466819 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hm67\" (UniqueName: \"kubernetes.io/projected/9093ab59-b372-4af4-8c67-a4ab97e79d33-kube-api-access-6hm67\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.467273 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9093ab59-b372-4af4-8c67-a4ab97e79d33-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.518943 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" event={"ID":"9093ab59-b372-4af4-8c67-a4ab97e79d33","Type":"ContainerDied","Data":"bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69"} Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.519011 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4b4e2614a4262bc5896e61499949ef2da42983c2d74d76ff5f34886eb17b69" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.519098 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-05c6-account-create-update-kmnjh" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.566378 4736 generic.go:334] "Generic (PLEG): container finished" podID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerID="155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf" exitCode=137 Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.566493 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerDied","Data":"155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf"} Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.575922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kslp8" event={"ID":"9763d87e-5b6c-45e2-9b19-a9799e96f9fd","Type":"ContainerDied","Data":"e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8"} Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.576001 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3adcf2aa5b07760d6fa32235bce1f27afb9d42acb6dc18a1fa7104be47971f8" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.576131 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kslp8" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.583441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" event={"ID":"74565328-52cd-455e-b53c-199f069833c9","Type":"ContainerStarted","Data":"d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05"} Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.584456 4736 scope.go:117] "RemoveContainer" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" Mar 16 15:33:52 crc kubenswrapper[4736]: E0316 15:33:52.584767 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57db97dd4d-cgdvc_openstack(74565328-52cd-455e-b53c-199f069833c9)\"" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" podUID="74565328-52cd-455e-b53c-199f069833c9" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.602058 4736 generic.go:334] "Generic (PLEG): container finished" podID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerID="ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8" exitCode=137 Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.602129 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerDied","Data":"ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8"} Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.858591 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75b949cc99-d78kz"] Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.883758 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.895483 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.986918 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts\") pod \"25372155-a159-47f2-aba8-8eacd41bf2ff\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.987307 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lntv2\" (UniqueName: \"kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2\") pod \"25372155-a159-47f2-aba8-8eacd41bf2ff\" (UID: \"25372155-a159-47f2-aba8-8eacd41bf2ff\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.987490 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts\") pod \"6fe90198-808b-48df-bdef-8341723e0511\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.987536 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59qxh\" (UniqueName: \"kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh\") pod \"6fe90198-808b-48df-bdef-8341723e0511\" (UID: \"6fe90198-808b-48df-bdef-8341723e0511\") " Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.994276 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6fe90198-808b-48df-bdef-8341723e0511" (UID: "6fe90198-808b-48df-bdef-8341723e0511"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:52 crc kubenswrapper[4736]: I0316 15:33:52.994420 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25372155-a159-47f2-aba8-8eacd41bf2ff" (UID: "25372155-a159-47f2-aba8-8eacd41bf2ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.008428 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.074908 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2" (OuterVolumeSpecName: "kube-api-access-lntv2") pod "25372155-a159-47f2-aba8-8eacd41bf2ff" (UID: "25372155-a159-47f2-aba8-8eacd41bf2ff"). InnerVolumeSpecName "kube-api-access-lntv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.098031 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmtvc\" (UniqueName: \"kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc\") pod \"1d25ea99-429a-495b-b0db-f30af232a75f\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.098350 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts\") pod \"1d25ea99-429a-495b-b0db-f30af232a75f\" (UID: \"1d25ea99-429a-495b-b0db-f30af232a75f\") " Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.099891 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25372155-a159-47f2-aba8-8eacd41bf2ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.099995 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lntv2\" (UniqueName: \"kubernetes.io/projected/25372155-a159-47f2-aba8-8eacd41bf2ff-kube-api-access-lntv2\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.100083 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6fe90198-808b-48df-bdef-8341723e0511-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.106248 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d25ea99-429a-495b-b0db-f30af232a75f" (UID: "1d25ea99-429a-495b-b0db-f30af232a75f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.129234 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh" (OuterVolumeSpecName: "kube-api-access-59qxh") pod "6fe90198-808b-48df-bdef-8341723e0511" (UID: "6fe90198-808b-48df-bdef-8341723e0511"). InnerVolumeSpecName "kube-api-access-59qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.175912 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc" (OuterVolumeSpecName: "kube-api-access-lmtvc") pod "1d25ea99-429a-495b-b0db-f30af232a75f" (UID: "1d25ea99-429a-495b-b0db-f30af232a75f"). InnerVolumeSpecName "kube-api-access-lmtvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.210183 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmtvc\" (UniqueName: \"kubernetes.io/projected/1d25ea99-429a-495b-b0db-f30af232a75f-kube-api-access-lmtvc\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.210225 4736 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d25ea99-429a-495b-b0db-f30af232a75f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.210236 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59qxh\" (UniqueName: \"kubernetes.io/projected/6fe90198-808b-48df-bdef-8341723e0511-kube-api-access-59qxh\") on node \"crc\" DevicePath \"\"" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.254210 4736 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod590b1c05-0ddc-4332-986b-525b6720d465"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod590b1c05-0ddc-4332-986b-525b6720d465] : Timed out while waiting for systemd to remove kubepods-besteffort-pod590b1c05_0ddc_4332_986b_525b6720d465.slice" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.254288 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod590b1c05-0ddc-4332-986b-525b6720d465] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod590b1c05-0ddc-4332-986b-525b6720d465] : Timed out while waiting for systemd to remove kubepods-besteffort-pod590b1c05_0ddc_4332_986b_525b6720d465.slice" pod="openstack/cinder-api-0" podUID="590b1c05-0ddc-4332-986b-525b6720d465" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.459254 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b4b9fc8dc-4c2rv"] Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.573223 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-678dd4f677-jxtsk"] Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574070 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9763d87e-5b6c-45e2-9b19-a9799e96f9fd" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574085 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9763d87e-5b6c-45e2-9b19-a9799e96f9fd" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574119 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25372155-a159-47f2-aba8-8eacd41bf2ff" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574125 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="25372155-a159-47f2-aba8-8eacd41bf2ff" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574140 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d25ea99-429a-495b-b0db-f30af232a75f" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574146 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d25ea99-429a-495b-b0db-f30af232a75f" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574158 4736 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9093ab59-b372-4af4-8c67-a4ab97e79d33" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574163 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9093ab59-b372-4af4-8c67-a4ab97e79d33" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574183 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe90198-808b-48df-bdef-8341723e0511" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574189 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe90198-808b-48df-bdef-8341723e0511" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.574206 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7edb4e-af8b-4049-b0d9-cd107b5cb783" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574212 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7edb4e-af8b-4049-b0d9-cd107b5cb783" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574416 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d25ea99-429a-495b-b0db-f30af232a75f" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574434 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7edb4e-af8b-4049-b0d9-cd107b5cb783" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574441 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="25372155-a159-47f2-aba8-8eacd41bf2ff" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574449 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9093ab59-b372-4af4-8c67-a4ab97e79d33" containerName="mariadb-account-create-update" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574455 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe90198-808b-48df-bdef-8341723e0511" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.574467 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9763d87e-5b6c-45e2-9b19-a9799e96f9fd" containerName="mariadb-database-create" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.575542 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.596725 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.597140 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.597513 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.622000 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerStarted","Data":"bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-run-httpd\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631592 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-config-data\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631691 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-public-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631727 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-internal-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631776 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzn5q\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-kube-api-access-kzn5q\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631835 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-combined-ca-bundle\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.631858 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-log-httpd\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.638227 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerStarted","Data":"92eb5124b55e332240f3b250db39e95238d56dfb7b44e8b175f4bd70f4318710"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.638930 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-678dd4f677-jxtsk"] Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.662788 4736 generic.go:334] "Generic (PLEG): container finished" podID="74565328-52cd-455e-b53c-199f069833c9" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" exitCode=1 Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.664012 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" event={"ID":"74565328-52cd-455e-b53c-199f069833c9","Type":"ContainerDied","Data":"d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.664489 4736 scope.go:117] "RemoveContainer" containerID="6d35b6e04d42a028ff585588c48e3c20927eee31924fdcb36fff2b8fb0e5034e" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.665788 4736 scope.go:117] "RemoveContainer" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.666098 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57db97dd4d-cgdvc_openstack(74565328-52cd-455e-b53c-199f069833c9)\"" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" podUID="74565328-52cd-455e-b53c-199f069833c9" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.722511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerStarted","Data":"fde1930900329527d3309922513a43d34f818e47ca21bd18b4bcbdbaa217b8ab"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.723823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.737979 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-combined-ca-bundle\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738048 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-log-httpd\") pod 
\"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738145 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-run-httpd\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738217 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-config-data\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738240 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738338 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-public-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-internal-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.738453 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzn5q\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-kube-api-access-kzn5q\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.751534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-log-httpd\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.758393 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-combined-ca-bundle\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.762858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bccee937-d642-4483-87fb-033b157cf68c-run-httpd\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " 
pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.765137 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.765184 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.765239 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:33:54.265219208 +0000 UTC m=+1235.992609495 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.778319 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-public-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.782840 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" event={"ID":"b85b52b6-77da-47f3-96b7-5230c7804524","Type":"ContainerStarted","Data":"140144c7f252dcab7b2235a51fd5ca5bd05f48c578e1994a6f62343cf8e93472"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.785454 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-internal-tls-certs\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.790715 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzn5q\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-kube-api-access-kzn5q\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.822256 4736 generic.go:334] "Generic (PLEG): container finished" podID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerID="ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4" exitCode=1 Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.822747 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8db4894c-j6mwd" event={"ID":"68f8b303-ec21-4d2d-a420-f06569625ae4","Type":"ContainerDied","Data":"ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.830946 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bccee937-d642-4483-87fb-033b157cf68c-config-data\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 
15:33:53.832355 4736 scope.go:117] "RemoveContainer" containerID="ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4" Mar 16 15:33:53 crc kubenswrapper[4736]: E0316 15:33:53.832699 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c8db4894c-j6mwd_openstack(68f8b303-ec21-4d2d-a420-f06569625ae4)\"" pod="openstack/heat-api-6c8db4894c-j6mwd" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.835143 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" event={"ID":"25372155-a159-47f2-aba8-8eacd41bf2ff","Type":"ContainerDied","Data":"e54fda7607b86e67408c841e3fc021738ab4d71c9216f2ec5a82c6b361615442"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.835208 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54fda7607b86e67408c841e3fc021738ab4d71c9216f2ec5a82c6b361615442" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.835275 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8b9d-account-create-update-jnndn" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.861844 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snm2s" event={"ID":"6fe90198-808b-48df-bdef-8341723e0511","Type":"ContainerDied","Data":"e975bddd9d54bb18d1f2d045444eff33537f9c0fa9b1c1cff8ebd37bf160805b"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.862185 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e975bddd9d54bb18d1f2d045444eff33537f9c0fa9b1c1cff8ebd37bf160805b" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.862343 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snm2s" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.876198 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b949cc99-d78kz" event={"ID":"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7","Type":"ContainerStarted","Data":"7b302c6c40b8213f076fa67d873c03f790f7da16809e3be29e4596cbbcc04594"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.887376 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.889334 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7c80-account-create-update-2q7s7" event={"ID":"1d25ea99-429a-495b-b0db-f30af232a75f","Type":"ContainerDied","Data":"5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b"} Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.889545 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1da14c22ff28abfcba15e3499b468a64fa9d91ed864d6200d7c371c838774b" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.889395 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7c80-account-create-update-2q7s7" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.894874 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.39248791 podStartE2EDuration="18.894855751s" podCreationTimestamp="2026-03-16 15:33:35 +0000 UTC" firstStartedPulling="2026-03-16 15:33:37.36260799 +0000 UTC m=+1219.089998277" lastFinishedPulling="2026-03-16 15:33:51.864975831 +0000 UTC m=+1233.592366118" observedRunningTime="2026-03-16 15:33:53.844516627 +0000 UTC m=+1235.571906914" watchObservedRunningTime="2026-03-16 15:33:53.894855751 +0000 UTC m=+1235.622246038" Mar 16 15:33:53 crc kubenswrapper[4736]: I0316 15:33:53.964045 4736 scope.go:117] "RemoveContainer" containerID="f0cbb0a9d1f4457335870ec440ec29d0cb5a96ad186a02698efd97de035f212c" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.062515 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.088461 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.191128 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.206536 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.214277 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.214520 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.215335 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.225169 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.270495 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.271045 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285282 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4frj\" (UniqueName: \"kubernetes.io/projected/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-kube-api-access-z4frj\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285437 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285641 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285706 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285825 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285867 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-scripts\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.285944 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-logs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: E0316 15:33:54.286742 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:33:54 crc kubenswrapper[4736]: E0316 15:33:54.286765 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:33:54 crc kubenswrapper[4736]: E0316 15:33:54.286829 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:33:55.286810691 +0000 UTC m=+1237.014200978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.397741 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4frj\" (UniqueName: \"kubernetes.io/projected/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-kube-api-access-z4frj\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.398664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.398754 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.398876 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.398963 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.399072 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.399171 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.399252 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-scripts\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.399343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-logs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.399823 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-logs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.410821 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.412213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.415448 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data-custom\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.421254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4frj\" (UniqueName: \"kubernetes.io/projected/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-kube-api-access-z4frj\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.424859 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-public-tls-certs\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.425132 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-config-data\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.426049 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.427303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9-scripts\") pod \"cinder-api-0\" (UID: \"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9\") " pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.567128 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.567191 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.572372 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.924932 4736 scope.go:117] "RemoveContainer" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" Mar 16 15:33:54 crc kubenswrapper[4736]: E0316 15:33:54.925734 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57db97dd4d-cgdvc_openstack(74565328-52cd-455e-b53c-199f069833c9)\"" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" podUID="74565328-52cd-455e-b53c-199f069833c9" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.926864 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" event={"ID":"b85b52b6-77da-47f3-96b7-5230c7804524","Type":"ContainerStarted","Data":"a6d343b8e71599a819c87475a54188e40f3d7181b35df62445552ec739b9c604"} Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.930719 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.943656 4736 scope.go:117] "RemoveContainer" containerID="ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4" Mar 16 15:33:54 crc kubenswrapper[4736]: E0316 15:33:54.943912 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c8db4894c-j6mwd_openstack(68f8b303-ec21-4d2d-a420-f06569625ae4)\"" pod="openstack/heat-api-6c8db4894c-j6mwd" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.953169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75b949cc99-d78kz" event={"ID":"f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7","Type":"ContainerStarted","Data":"b25d753dab4ae8f72cf74096d636c566e21187e9afc35445e00be175d92ca736"} Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.954878 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:33:54 crc kubenswrapper[4736]: I0316 15:33:54.975523 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" podStartSLOduration=4.975486628 podStartE2EDuration="4.975486628s" podCreationTimestamp="2026-03-16 15:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:54.963732454 +0000 UTC m=+1236.691122741" watchObservedRunningTime="2026-03-16 15:33:54.975486628 +0000 UTC m=+1236.702876915" Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.008149 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-75b949cc99-d78kz" podStartSLOduration=5.008098619 podStartE2EDuration="5.008098619s" podCreationTimestamp="2026-03-16 15:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:55.006858676 +0000 UTC m=+1236.734248963" watchObservedRunningTime="2026-03-16 15:33:55.008098619 +0000 UTC m=+1236.735488906" Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.024086 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590b1c05-0ddc-4332-986b-525b6720d465" 
path="/var/lib/kubelet/pods/590b1c05-0ddc-4332-986b-525b6720d465/volumes" Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.188668 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 16 15:33:55 crc kubenswrapper[4736]: W0316 15:33:55.213577 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c8e7ab_5a8b_4ea8_bf0a_745f04e4f9e9.slice/crio-d3e7bcb0d1a15dbd84f41b581156446902aa14fdcd8923de25de80a5da5580e7 WatchSource:0}: Error finding container d3e7bcb0d1a15dbd84f41b581156446902aa14fdcd8923de25de80a5da5580e7: Status 404 returned error can't find the container with id d3e7bcb0d1a15dbd84f41b581156446902aa14fdcd8923de25de80a5da5580e7 Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.323559 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:55 crc kubenswrapper[4736]: E0316 15:33:55.323814 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:33:55 crc kubenswrapper[4736]: E0316 15:33:55.323854 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:33:55 crc kubenswrapper[4736]: E0316 15:33:55.323934 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:33:57.323909346 +0000 UTC m=+1239.051299633 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.987139 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9","Type":"ContainerStarted","Data":"d3e7bcb0d1a15dbd84f41b581156446902aa14fdcd8923de25de80a5da5580e7"} Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.987865 4736 scope.go:117] "RemoveContainer" containerID="ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4" Mar 16 15:33:55 crc kubenswrapper[4736]: E0316 15:33:55.988215 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c8db4894c-j6mwd_openstack(68f8b303-ec21-4d2d-a420-f06569625ae4)\"" pod="openstack/heat-api-6c8db4894c-j6mwd" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" Mar 16 15:33:55 crc kubenswrapper[4736]: I0316 15:33:55.988990 4736 scope.go:117] "RemoveContainer" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" Mar 16 15:33:55 crc kubenswrapper[4736]: E0316 15:33:55.989182 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-57db97dd4d-cgdvc_openstack(74565328-52cd-455e-b53c-199f069833c9)\"" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" podUID="74565328-52cd-455e-b53c-199f069833c9" Mar 16 15:33:56 crc kubenswrapper[4736]: I0316 15:33:56.579952 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:33:56 crc kubenswrapper[4736]: I0316 15:33:56.868173 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.029157 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-central-agent" containerID="cri-o://b9fb6e699f3ea391a43c6ebb50fd2a1b84b5414735c34127821eead6962bec2b" gracePeriod=30 Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.029878 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="proxy-httpd" containerID="cri-o://fde1930900329527d3309922513a43d34f818e47ca21bd18b4bcbdbaa217b8ab" gracePeriod=30 Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.030026 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-notification-agent" containerID="cri-o://5575127eab2bf22fa7b9d9a952e86aa33867feb7857f0fea413c7cfaa3c4b977" gracePeriod=30 Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.030068 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="sg-core" containerID="cri-o://683b5e4909c2aceb15bd9e1c93eb1708165a4b42f671d9898b2a182e458069e2" gracePeriod=30 Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.082027 
4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9","Type":"ContainerStarted","Data":"18d1b846f6be31cbd2c50aeb9282e9c425d1ad3c0a9b71338f6bcff3f84dc49d"} Mar 16 15:33:57 crc kubenswrapper[4736]: I0316 15:33:57.425284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:33:57 crc kubenswrapper[4736]: E0316 15:33:57.425548 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:33:57 crc kubenswrapper[4736]: E0316 15:33:57.425600 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:33:57 crc kubenswrapper[4736]: E0316 15:33:57.425688 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:34:01.425659899 +0000 UTC m=+1243.153050366 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.046152 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9","Type":"ContainerStarted","Data":"aafe2ae9b46f933cf7e57b1d94129fc18f3abf4f6389d437a0bdc7d3eb831808"} Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.046557 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.058992 4736 generic.go:334] "Generic (PLEG): container finished" podID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerID="fde1930900329527d3309922513a43d34f818e47ca21bd18b4bcbdbaa217b8ab" exitCode=0 Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.059051 4736 generic.go:334] "Generic (PLEG): container finished" podID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerID="683b5e4909c2aceb15bd9e1c93eb1708165a4b42f671d9898b2a182e458069e2" exitCode=2 Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.059063 4736 generic.go:334] "Generic (PLEG): container finished" podID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerID="b9fb6e699f3ea391a43c6ebb50fd2a1b84b5414735c34127821eead6962bec2b" exitCode=0 Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.059093 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerDied","Data":"fde1930900329527d3309922513a43d34f818e47ca21bd18b4bcbdbaa217b8ab"} Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.059149 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerDied","Data":"683b5e4909c2aceb15bd9e1c93eb1708165a4b42f671d9898b2a182e458069e2"} Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.059159 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerDied","Data":"b9fb6e699f3ea391a43c6ebb50fd2a1b84b5414735c34127821eead6962bec2b"} Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.077900 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.077877912 podStartE2EDuration="4.077877912s" podCreationTimestamp="2026-03-16 15:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:33:58.072143949 +0000 UTC m=+1239.799534236" watchObservedRunningTime="2026-03-16 15:33:58.077877912 +0000 UTC m=+1239.805268199" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.448311 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zhxnt"] Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.450369 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.453742 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.453912 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.454144 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zld2d" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.474459 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zhxnt"] Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.550601 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9j5\" (UniqueName: \"kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.550808 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.550834 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.550878 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 
15:33:58.652704 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.652860 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9j5\" (UniqueName: \"kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.653055 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.653078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.663633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.665811 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.670357 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.693149 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn9j5\" (UniqueName: \"kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5\") pod \"nova-cell0-conductor-db-sync-zhxnt\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:58 crc kubenswrapper[4736]: I0316 15:33:58.783422 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.080211 4736 generic.go:334] "Generic (PLEG): container finished" podID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerID="5575127eab2bf22fa7b9d9a952e86aa33867feb7857f0fea413c7cfaa3c4b977" exitCode=0 Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.080278 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerDied","Data":"5575127eab2bf22fa7b9d9a952e86aa33867feb7857f0fea413c7cfaa3c4b977"} Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.228297 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.298050 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-784f554468-tgz6j" Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.451480 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.452066 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-657c4c6596-nmfhd" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-log" containerID="cri-o://ca314bf15850304723004c1b313bdeda120468ecfaf784ef381f814bf6c8a800" gracePeriod=30 Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.452554 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-657c4c6596-nmfhd" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-api" containerID="cri-o://ecda1de7c141cbe7ee94e87cd039c10e914287a00c96da9567f266214ab678f6" gracePeriod=30 Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.899684 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": read tcp 10.217.0.2:45094->10.217.0.183:8004: read: connection reset by peer" Mar 16 15:33:59 crc kubenswrapper[4736]: I0316 15:33:59.900550 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": dial tcp 10.217.0.183:8004: connect: connection refused" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.045467 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": read tcp 10.217.0.2:49660->10.217.0.182:8000: read: connection reset by peer" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.092440 4736 generic.go:334] "Generic (PLEG): container finished" podID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerID="9ee3106e1ec1b95880a59b2f5ba0b62d0a9e52092274015d2a60a3a720a425f5" exitCode=0 Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.092524 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-799b5776df-77jmf" 
event={"ID":"de867ad2-920b-4719-972b-71b4cbb83e2e","Type":"ContainerDied","Data":"9ee3106e1ec1b95880a59b2f5ba0b62d0a9e52092274015d2a60a3a720a425f5"} Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.095157 4736 generic.go:334] "Generic (PLEG): container finished" podID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerID="ca314bf15850304723004c1b313bdeda120468ecfaf784ef381f814bf6c8a800" exitCode=143 Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.095262 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerDied","Data":"ca314bf15850304723004c1b313bdeda120468ecfaf784ef381f814bf6c8a800"} Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.157558 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561254-fl8vh"] Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.158965 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.163432 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.163783 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.163835 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.175921 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561254-fl8vh"] Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.320403 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tggxm\" (UniqueName: \"kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm\") pod \"auto-csr-approver-29561254-fl8vh\" (UID: \"4074fbb2-9d24-491f-9053-54171f6e4dbd\") " pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.423406 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tggxm\" (UniqueName: \"kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm\") pod \"auto-csr-approver-29561254-fl8vh\" (UID: \"4074fbb2-9d24-491f-9053-54171f6e4dbd\") " pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.464049 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tggxm\" (UniqueName: \"kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm\") pod \"auto-csr-approver-29561254-fl8vh\" (UID: \"4074fbb2-9d24-491f-9053-54171f6e4dbd\") " pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:00 crc kubenswrapper[4736]: I0316 15:34:00.486498 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.120148 4736 generic.go:334] "Generic (PLEG): container finished" podID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerID="9d08fae30dd0e2b3845a83d9d5ecc08b039e1df826b5a22a1bd2213c610104be" exitCode=0 Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.120270 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56b5646679-zf58k" event={"ID":"55d9a0a2-2842-4c23-98b1-2f97469a951f","Type":"ContainerDied","Data":"9d08fae30dd0e2b3845a83d9d5ecc08b039e1df826b5a22a1bd2213c610104be"} Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.446220 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:34:01 crc kubenswrapper[4736]: E0316 15:34:01.446480 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:34:01 crc kubenswrapper[4736]: E0316 15:34:01.446521 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:34:01 crc kubenswrapper[4736]: E0316 15:34:01.446603 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:34:09.44657483 +0000 UTC m=+1251.173965117 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.521274 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": dial tcp 10.217.0.182:8000: connect: connection refused" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.604409 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.604779 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.813975 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.814030 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:34:01 crc kubenswrapper[4736]: I0316 15:34:01.849933 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": dial tcp 10.217.0.183:8004: connect: connection refused" Mar 16 15:34:02 crc kubenswrapper[4736]: I0316 15:34:02.995860 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-75b949cc99-d78kz" Mar 16 15:34:03 crc kubenswrapper[4736]: I0316 15:34:03.091293 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:34:03 crc kubenswrapper[4736]: I0316 15:34:03.156826 4736 generic.go:334] "Generic (PLEG): container finished" podID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerID="ecda1de7c141cbe7ee94e87cd039c10e914287a00c96da9567f266214ab678f6" exitCode=0 Mar 16 15:34:03 crc kubenswrapper[4736]: I0316 15:34:03.156906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerDied","Data":"ecda1de7c141cbe7ee94e87cd039c10e914287a00c96da9567f266214ab678f6"} Mar 16 15:34:03 crc kubenswrapper[4736]: I0316 15:34:03.485913 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7b4b9fc8dc-4c2rv" Mar 16 15:34:03 crc kubenswrapper[4736]: I0316 15:34:03.586320 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:34:04 crc kubenswrapper[4736]: I0316 15:34:04.266404 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-678d85b7f7-bdd5r" Mar 16 15:34:04 crc kubenswrapper[4736]: I0316 15:34:04.328526 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:34:04 crc kubenswrapper[4736]: I0316 15:34:04.328910 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-78d5db5598-m7dnz" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" 
containerName="heat-engine" containerID="cri-o://ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" gracePeriod=60 Mar 16 15:34:06 crc kubenswrapper[4736]: E0316 15:34:06.399192 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:06 crc kubenswrapper[4736]: E0316 15:34:06.404148 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:06 crc kubenswrapper[4736]: E0316 15:34:06.408693 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:06 crc kubenswrapper[4736]: E0316 15:34:06.408747 4736 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-78d5db5598-m7dnz" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerName="heat-engine" Mar 16 15:34:06 crc kubenswrapper[4736]: I0316 15:34:06.495930 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": dial tcp 10.217.0.180:3000: connect: connection refused" Mar 16 15:34:06 crc kubenswrapper[4736]: I0316 15:34:06.521414 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": dial tcp 10.217.0.182:8000: connect: connection refused" Mar 16 15:34:06 crc kubenswrapper[4736]: I0316 15:34:06.872497 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": dial tcp 10.217.0.183:8004: connect: connection refused" Mar 16 15:34:08 crc kubenswrapper[4736]: I0316 15:34:08.606370 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.196:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:34:09 crc kubenswrapper[4736]: I0316 15:34:09.464835 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:34:09 crc kubenswrapper[4736]: 
E0316 15:34:09.465123 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:34:09 crc kubenswrapper[4736]: E0316 15:34:09.465435 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:34:09 crc kubenswrapper[4736]: E0316 15:34:09.465533 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:34:25.465500999 +0000 UTC m=+1267.192891276 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:34:09 crc kubenswrapper[4736]: I0316 15:34:09.580550 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.196:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:34:11 crc kubenswrapper[4736]: I0316 15:34:11.521684 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-56b5646679-zf58k" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.182:8000/healthcheck\": dial tcp 10.217.0.182:8000: connect: connection refused" Mar 16 15:34:11 crc kubenswrapper[4736]: I0316 15:34:11.606156 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:34:11 crc kubenswrapper[4736]: I0316 15:34:11.819190 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:34:11 crc kubenswrapper[4736]: I0316 15:34:11.850726 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-799b5776df-77jmf" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.183:8004/healthcheck\": dial tcp 10.217.0.183:8004: connect: connection refused" Mar 16 15:34:12 crc kubenswrapper[4736]: E0316 15:34:12.085319 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:34:12 crc kubenswrapper[4736]: E0316 15:34:12.085381 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:34:12 crc kubenswrapper[4736]: E0316 
15:34:12.085537 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bbh59h56ch5b6h5c7h554h66bh546hcdh57bh5d5h59chd4hf4h8chf4h547hc4h676h6bhddh685h558h556h5c9h68fh67ch674h97hch679h58dq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkdnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(421bab10-ac4a-458f-98e3-18cd0adef038): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:34:12 crc kubenswrapper[4736]: E0316 15:34:12.086771 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="421bab10-ac4a-458f-98e3-18cd0adef038" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.299938 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.340423 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.395377 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c8db4894c-j6mwd" event={"ID":"68f8b303-ec21-4d2d-a420-f06569625ae4","Type":"ContainerDied","Data":"c34180b85432981ac651492569d569fb7f02b3336c40f9f4afe0c878b8d766e1"} Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.395685 4736 scope.go:117] "RemoveContainer" containerID="ff6195edf0f8b7ff7b191ad7d6a034c8ff4e86f0fcd765c555d3b3bb58dd34d4" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.395980 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c8db4894c-j6mwd" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.421130 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.421303 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57db97dd4d-cgdvc" event={"ID":"74565328-52cd-455e-b53c-199f069833c9","Type":"ContainerDied","Data":"1542733dff5d1b008c9a91ee337d4ff890b2c7569f99dded3b23089f8db57f6d"} Mar 16 15:34:12 crc kubenswrapper[4736]: E0316 15:34:12.422308 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/openstackclient" podUID="421bab10-ac4a-458f-98e3-18cd0adef038" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.441837 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzbn\" (UniqueName: \"kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn\") pod \"74565328-52cd-455e-b53c-199f069833c9\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.441954 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom\") pod \"74565328-52cd-455e-b53c-199f069833c9\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.442128 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle\") pod \"74565328-52cd-455e-b53c-199f069833c9\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.442176 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data\") pod \"74565328-52cd-455e-b53c-199f069833c9\" (UID: \"74565328-52cd-455e-b53c-199f069833c9\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.480025 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "74565328-52cd-455e-b53c-199f069833c9" (UID: "74565328-52cd-455e-b53c-199f069833c9"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.500789 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn" (OuterVolumeSpecName: "kube-api-access-pxzbn") pod "74565328-52cd-455e-b53c-199f069833c9" (UID: "74565328-52cd-455e-b53c-199f069833c9"). InnerVolumeSpecName "kube-api-access-pxzbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.526628 4736 scope.go:117] "RemoveContainer" containerID="d82df3d86a501a140515b66d086ff6cb559945e1ec700711862a7f9cf2be2e05" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.551243 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tpp\" (UniqueName: \"kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp\") pod \"68f8b303-ec21-4d2d-a420-f06569625ae4\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.551384 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom\") pod \"68f8b303-ec21-4d2d-a420-f06569625ae4\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.551647 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle\") pod \"68f8b303-ec21-4d2d-a420-f06569625ae4\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.551788 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data\") pod \"68f8b303-ec21-4d2d-a420-f06569625ae4\" (UID: \"68f8b303-ec21-4d2d-a420-f06569625ae4\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.570599 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzbn\" (UniqueName: \"kubernetes.io/projected/74565328-52cd-455e-b53c-199f069833c9-kube-api-access-pxzbn\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.570652 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.633645 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68f8b303-ec21-4d2d-a420-f06569625ae4" (UID: "68f8b303-ec21-4d2d-a420-f06569625ae4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.641378 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp" (OuterVolumeSpecName: "kube-api-access-t7tpp") pod "68f8b303-ec21-4d2d-a420-f06569625ae4" (UID: "68f8b303-ec21-4d2d-a420-f06569625ae4"). InnerVolumeSpecName "kube-api-access-t7tpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.672936 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tpp\" (UniqueName: \"kubernetes.io/projected/68f8b303-ec21-4d2d-a420-f06569625ae4-kube-api-access-t7tpp\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.672967 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.699973 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74565328-52cd-455e-b53c-199f069833c9" (UID: "74565328-52cd-455e-b53c-199f069833c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.747443 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data" (OuterVolumeSpecName: "config-data") pod "74565328-52cd-455e-b53c-199f069833c9" (UID: "74565328-52cd-455e-b53c-199f069833c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.781509 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.781542 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74565328-52cd-455e-b53c-199f069833c9-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.804548 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f8b303-ec21-4d2d-a420-f06569625ae4" (UID: "68f8b303-ec21-4d2d-a420-f06569625ae4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.832075 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data" (OuterVolumeSpecName: "config-data") pod "68f8b303-ec21-4d2d-a420-f06569625ae4" (UID: "68f8b303-ec21-4d2d-a420-f06569625ae4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.844250 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.884949 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom\") pod \"de867ad2-920b-4719-972b-71b4cbb83e2e\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.885099 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dczsj\" (UniqueName: \"kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj\") pod \"de867ad2-920b-4719-972b-71b4cbb83e2e\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.885147 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle\") pod \"de867ad2-920b-4719-972b-71b4cbb83e2e\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.885255 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data\") pod \"de867ad2-920b-4719-972b-71b4cbb83e2e\" (UID: \"de867ad2-920b-4719-972b-71b4cbb83e2e\") " Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.885751 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.885770 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f8b303-ec21-4d2d-a420-f06569625ae4-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.900325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj" (OuterVolumeSpecName: "kube-api-access-dczsj") pod "de867ad2-920b-4719-972b-71b4cbb83e2e" (UID: "de867ad2-920b-4719-972b-71b4cbb83e2e"). InnerVolumeSpecName "kube-api-access-dczsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.913738 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de867ad2-920b-4719-972b-71b4cbb83e2e" (UID: "de867ad2-920b-4719-972b-71b4cbb83e2e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.994826 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:12 crc kubenswrapper[4736]: I0316 15:34:12.995703 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dczsj\" (UniqueName: \"kubernetes.io/projected/de867ad2-920b-4719-972b-71b4cbb83e2e-kube-api-access-dczsj\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.112438 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de867ad2-920b-4719-972b-71b4cbb83e2e" (UID: "de867ad2-920b-4719-972b-71b4cbb83e2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.243261 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.298996 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.304523 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.332401 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.346998 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfp5q\" (UniqueName: \"kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.347386 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.347541 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.347619 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.347806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.347918 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.348011 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts\") pod \"8dcb0e26-e416-4867-8748-92d2b405bc20\" (UID: \"8dcb0e26-e416-4867-8748-92d2b405bc20\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.380353 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs" (OuterVolumeSpecName: "logs") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.411348 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data" (OuterVolumeSpecName: "config-data") pod "de867ad2-920b-4719-972b-71b4cbb83e2e" (UID: "de867ad2-920b-4719-972b-71b4cbb83e2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.411477 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts" (OuterVolumeSpecName: "scripts") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.411624 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q" (OuterVolumeSpecName: "kube-api-access-zfp5q") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "kube-api-access-zfp5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.454456 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom\") pod \"55d9a0a2-2842-4c23-98b1-2f97469a951f\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.454782 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85r4\" (UniqueName: \"kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4\") pod \"55d9a0a2-2842-4c23-98b1-2f97469a951f\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.455116 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data\") pod \"55d9a0a2-2842-4c23-98b1-2f97469a951f\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.455228 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle\") pod \"55d9a0a2-2842-4c23-98b1-2f97469a951f\" (UID: \"55d9a0a2-2842-4c23-98b1-2f97469a951f\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.455977 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.456082 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfp5q\" (UniqueName: \"kubernetes.io/projected/8dcb0e26-e416-4867-8748-92d2b405bc20-kube-api-access-zfp5q\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.456174 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de867ad2-920b-4719-972b-71b4cbb83e2e-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.456291 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8dcb0e26-e416-4867-8748-92d2b405bc20-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.490210 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.502134 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c8db4894c-j6mwd"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.555313 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "55d9a0a2-2842-4c23-98b1-2f97469a951f" (UID: "55d9a0a2-2842-4c23-98b1-2f97469a951f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.563563 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.563634 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.563744 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.563847 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.563935 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.568224 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.568355 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4" (OuterVolumeSpecName: "kube-api-access-m85r4") pod "55d9a0a2-2842-4c23-98b1-2f97469a951f" (UID: "55d9a0a2-2842-4c23-98b1-2f97469a951f"). InnerVolumeSpecName "kube-api-access-m85r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.568579 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.568638 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j4xz\" (UniqueName: \"kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz\") pod \"979cb634-6ed1-418b-8b87-1e5064da96c8\" (UID: \"979cb634-6ed1-418b-8b87-1e5064da96c8\") " Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.569154 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.569167 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.569176 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85r4\" (UniqueName: \"kubernetes.io/projected/55d9a0a2-2842-4c23-98b1-2f97469a951f-kube-api-access-m85r4\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.572567 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.577739 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-657c4c6596-nmfhd" event={"ID":"8dcb0e26-e416-4867-8748-92d2b405bc20","Type":"ContainerDied","Data":"86c77a0dd86a4de8035b6705d5dd08dd1659c0e6035167ded67226ed33d2f058"} Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.578178 4736 scope.go:117] "RemoveContainer" containerID="ecda1de7c141cbe7ee94e87cd039c10e914287a00c96da9567f266214ab678f6" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.578303 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-657c4c6596-nmfhd" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.593830 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz" (OuterVolumeSpecName: "kube-api-access-6j4xz") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "kube-api-access-6j4xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.597541 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-799b5776df-77jmf" event={"ID":"de867ad2-920b-4719-972b-71b4cbb83e2e","Type":"ContainerDied","Data":"408f9f1de266a5aa2a78b12eaadb1dfdfb6b328d07103fb08f269b2dc8e96b04"} Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.597831 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-799b5776df-77jmf" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.616261 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.196:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.642458 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.655182 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts" (OuterVolumeSpecName: "scripts") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.656547 4736 scope.go:117] "RemoveContainer" containerID="ca314bf15850304723004c1b313bdeda120468ecfaf784ef381f814bf6c8a800" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.669390 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57db97dd4d-cgdvc"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.671061 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.671081 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/979cb634-6ed1-418b-8b87-1e5064da96c8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.671091 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j4xz\" (UniqueName: \"kubernetes.io/projected/979cb634-6ed1-418b-8b87-1e5064da96c8-kube-api-access-6j4xz\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.674308 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56b5646679-zf58k" event={"ID":"55d9a0a2-2842-4c23-98b1-2f97469a951f","Type":"ContainerDied","Data":"5e5eccee78369ffaddead3274e32e3495180636a1612230380a70f32ea04eef8"} Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.674408 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-56b5646679-zf58k" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.755389 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"979cb634-6ed1-418b-8b87-1e5064da96c8","Type":"ContainerDied","Data":"52440f446e2eb427ca8b6e76e7acad42f9fb19d4abdf19939e01a264b1089c43"} Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.755553 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.783638 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561254-fl8vh"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.787166 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.789636 4736 scope.go:117] "RemoveContainer" containerID="9ee3106e1ec1b95880a59b2f5ba0b62d0a9e52092274015d2a60a3a720a425f5" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.801801 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zhxnt"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.810271 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.822749 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-799b5776df-77jmf"] Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.849068 4736 scope.go:117] "RemoveContainer" containerID="9d08fae30dd0e2b3845a83d9d5ecc08b039e1df826b5a22a1bd2213c610104be" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.854345 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.863350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55d9a0a2-2842-4c23-98b1-2f97469a951f" (UID: "55d9a0a2-2842-4c23-98b1-2f97469a951f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.883705 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.884083 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.888509 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.895275 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.910838 4736 scope.go:117] "RemoveContainer" containerID="fde1930900329527d3309922513a43d34f818e47ca21bd18b4bcbdbaa217b8ab" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.943550 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data" (OuterVolumeSpecName: "config-data") pod "55d9a0a2-2842-4c23-98b1-2f97469a951f" (UID: "55d9a0a2-2842-4c23-98b1-2f97469a951f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.944328 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data" (OuterVolumeSpecName: "config-data") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.959268 4736 scope.go:117] "RemoveContainer" containerID="683b5e4909c2aceb15bd9e1c93eb1708165a4b42f671d9898b2a182e458069e2" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.994224 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55d9a0a2-2842-4c23-98b1-2f97469a951f-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.994251 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:13 crc kubenswrapper[4736]: I0316 15:34:13.994262 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.055282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data" (OuterVolumeSpecName: "config-data") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.097369 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.130396 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "979cb634-6ed1-418b-8b87-1e5064da96c8" (UID: "979cb634-6ed1-418b-8b87-1e5064da96c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.143297 4736 scope.go:117] "RemoveContainer" containerID="5575127eab2bf22fa7b9d9a952e86aa33867feb7857f0fea413c7cfaa3c4b977" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.157166 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.159513 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8dcb0e26-e416-4867-8748-92d2b405bc20" (UID: "8dcb0e26-e416-4867-8748-92d2b405bc20"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.172775 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-56b5646679-zf58k"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.200560 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8dcb0e26-e416-4867-8748-92d2b405bc20-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.200597 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979cb634-6ed1-418b-8b87-1e5064da96c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.278381 4736 scope.go:117] "RemoveContainer" containerID="b9fb6e699f3ea391a43c6ebb50fd2a1b84b5414735c34127821eead6962bec2b" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.278383 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.293849 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-657c4c6596-nmfhd"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.409612 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.419544 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.454388 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.455198 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.455278 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.455335 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.455410 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.455485 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="proxy-httpd" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.455617 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="proxy-httpd" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.455708 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="sg-core" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.455878 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="sg-core" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.456014 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.456086 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.467229 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-notification-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.467649 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-notification-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.467768 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.467850 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.467934 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-central-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.468008 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-central-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.468120 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.468252 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-api" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.468349 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-log" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.468431 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-log" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.468950 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-notification-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.469037 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.469147 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="proxy-httpd" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.469235 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.469317 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.469392 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="ceilometer-central-agent" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.470203 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc 
kubenswrapper[4736]: I0316 15:34:14.470301 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.470394 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" containerName="placement-log" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.470472 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.470556 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" containerName="sg-core" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.470641 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.470945 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.471017 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="74565328-52cd-455e-b53c-199f069833c9" containerName="heat-cfnapi" Mar 16 15:34:14 crc kubenswrapper[4736]: E0316 15:34:14.471087 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.471161 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" containerName="heat-api" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.473009 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.478604 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.478812 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507498 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507590 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507650 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507667 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpdq9\" (UniqueName: \"kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507711 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.507773 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.520553 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.590527 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.196:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:34:14 crc 
kubenswrapper[4736]: I0316 15:34:14.604491 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610225 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610345 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610373 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpdq9\" (UniqueName: \"kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610391 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.610435 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.611092 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.613614 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.631573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.631584 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.640009 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.648842 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpdq9\" (UniqueName: \"kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.661236 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts\") pod \"ceilometer-0\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " pod="openstack/ceilometer-0" Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.775538 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" event={"ID":"c548edb2-4f30-4790-bf13-c2509601cd25","Type":"ContainerStarted","Data":"21afe0fbca9b68412e19fdd037992c13cf9f091fda057b2ec010156a34cf4182"} Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.802413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" event={"ID":"4074fbb2-9d24-491f-9053-54171f6e4dbd","Type":"ContainerStarted","Data":"429f47af199b91b8f7d497f46c805a42694e5b1adda6e417ce9aef5f07b94828"} Mar 16 15:34:14 crc kubenswrapper[4736]: I0316 15:34:14.831569 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.001502 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d9a0a2-2842-4c23-98b1-2f97469a951f" path="/var/lib/kubelet/pods/55d9a0a2-2842-4c23-98b1-2f97469a951f/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.009692 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f8b303-ec21-4d2d-a420-f06569625ae4" path="/var/lib/kubelet/pods/68f8b303-ec21-4d2d-a420-f06569625ae4/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.010330 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74565328-52cd-455e-b53c-199f069833c9" path="/var/lib/kubelet/pods/74565328-52cd-455e-b53c-199f069833c9/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.034440 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dcb0e26-e416-4867-8748-92d2b405bc20" path="/var/lib/kubelet/pods/8dcb0e26-e416-4867-8748-92d2b405bc20/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.036015 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979cb634-6ed1-418b-8b87-1e5064da96c8" path="/var/lib/kubelet/pods/979cb634-6ed1-418b-8b87-1e5064da96c8/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.036989 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de867ad2-920b-4719-972b-71b4cbb83e2e" path="/var/lib/kubelet/pods/de867ad2-920b-4719-972b-71b4cbb83e2e/volumes" Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.490089 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.866001 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" event={"ID":"4074fbb2-9d24-491f-9053-54171f6e4dbd","Type":"ContainerStarted","Data":"2b423fc65aca3355837af1c8c08bd0591ff54a664ef759155f18ef866d482d88"} Mar 16 15:34:15 crc kubenswrapper[4736]: I0316 15:34:15.876271 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerStarted","Data":"01ebbf430160b320cd4edc72b1b620f082bf146ef6d1ddf45649aa144f687b4c"} Mar 16 15:34:16 crc kubenswrapper[4736]: E0316 15:34:16.405372 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:16 crc kubenswrapper[4736]: E0316 15:34:16.408355 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:16 crc kubenswrapper[4736]: E0316 15:34:16.414625 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 16 15:34:16 crc kubenswrapper[4736]: E0316 
15:34:16.414731 4736 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-78d5db5598-m7dnz" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerName="heat-engine" Mar 16 15:34:16 crc kubenswrapper[4736]: I0316 15:34:16.896163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerStarted","Data":"845717db5b63f7978fc2f2d6435480fd88ed59aeb4bcaa960099bb32253a058c"} Mar 16 15:34:17 crc kubenswrapper[4736]: I0316 15:34:17.913417 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerStarted","Data":"f3535adb29f1bec8d7057c20c9ba896944030e9edd10e1049441e98eaa683b92"} Mar 16 15:34:17 crc kubenswrapper[4736]: I0316 15:34:17.914028 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerStarted","Data":"d84664e5c9d1dd16267705f20a70689ca0da2e21aed27f64fcb8dbeb4a981420"} Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.000484 4736 generic.go:334] "Generic (PLEG): container finished" podID="4074fbb2-9d24-491f-9053-54171f6e4dbd" containerID="2b423fc65aca3355837af1c8c08bd0591ff54a664ef759155f18ef866d482d88" exitCode=0 Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.006616 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" event={"ID":"4074fbb2-9d24-491f-9053-54171f6e4dbd","Type":"ContainerDied","Data":"2b423fc65aca3355837af1c8c08bd0591ff54a664ef759155f18ef866d482d88"} Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.039404 4736 generic.go:334] "Generic (PLEG): container finished" podID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" exitCode=0 Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.039453 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78d5db5598-m7dnz" event={"ID":"2a16c147-1622-4470-a54d-e331fae4ea8a","Type":"ContainerDied","Data":"ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601"} Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.054101 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" podStartSLOduration=17.753337491 podStartE2EDuration="19.054078117s" podCreationTimestamp="2026-03-16 15:34:00 +0000 UTC" firstStartedPulling="2026-03-16 15:34:13.714847182 +0000 UTC m=+1255.442237469" lastFinishedPulling="2026-03-16 15:34:15.015587808 +0000 UTC m=+1256.742978095" observedRunningTime="2026-03-16 15:34:15.894773174 +0000 UTC m=+1257.622163461" watchObservedRunningTime="2026-03-16 15:34:19.054078117 +0000 UTC m=+1260.781468404" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.183438 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.268966 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data\") pod \"2a16c147-1622-4470-a54d-e331fae4ea8a\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.269214 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom\") pod \"2a16c147-1622-4470-a54d-e331fae4ea8a\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.269341 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle\") pod \"2a16c147-1622-4470-a54d-e331fae4ea8a\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.269406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fskwq\" (UniqueName: \"kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq\") pod \"2a16c147-1622-4470-a54d-e331fae4ea8a\" (UID: \"2a16c147-1622-4470-a54d-e331fae4ea8a\") " Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.290814 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a16c147-1622-4470-a54d-e331fae4ea8a" (UID: "2a16c147-1622-4470-a54d-e331fae4ea8a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.303440 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq" (OuterVolumeSpecName: "kube-api-access-fskwq") pod "2a16c147-1622-4470-a54d-e331fae4ea8a" (UID: "2a16c147-1622-4470-a54d-e331fae4ea8a"). InnerVolumeSpecName "kube-api-access-fskwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.332012 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a16c147-1622-4470-a54d-e331fae4ea8a" (UID: "2a16c147-1622-4470-a54d-e331fae4ea8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.350707 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data" (OuterVolumeSpecName: "config-data") pod "2a16c147-1622-4470-a54d-e331fae4ea8a" (UID: "2a16c147-1622-4470-a54d-e331fae4ea8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.375526 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fskwq\" (UniqueName: \"kubernetes.io/projected/2a16c147-1622-4470-a54d-e331fae4ea8a-kube-api-access-fskwq\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.375590 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.375604 4736 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:19 crc kubenswrapper[4736]: I0316 15:34:19.375613 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a16c147-1622-4470-a54d-e331fae4ea8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.050998 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-78d5db5598-m7dnz" Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.054144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-78d5db5598-m7dnz" event={"ID":"2a16c147-1622-4470-a54d-e331fae4ea8a","Type":"ContainerDied","Data":"b3970329543467f63c17fe569e5ed8e82ada55c03202f066deeb45a36ba40f65"} Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.054186 4736 scope.go:117] "RemoveContainer" containerID="ebcc15e8677362de260b8c26909e58b6d9aa895b94bd769385e9f9c2c80a2601" Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.100140 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.108475 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-78d5db5598-m7dnz"] Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.515955 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.600983 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tggxm\" (UniqueName: \"kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm\") pod \"4074fbb2-9d24-491f-9053-54171f6e4dbd\" (UID: \"4074fbb2-9d24-491f-9053-54171f6e4dbd\") " Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.609304 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm" (OuterVolumeSpecName: "kube-api-access-tggxm") pod "4074fbb2-9d24-491f-9053-54171f6e4dbd" (UID: "4074fbb2-9d24-491f-9053-54171f6e4dbd"). InnerVolumeSpecName "kube-api-access-tggxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:20 crc kubenswrapper[4736]: I0316 15:34:20.703661 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tggxm\" (UniqueName: \"kubernetes.io/projected/4074fbb2-9d24-491f-9053-54171f6e4dbd-kube-api-access-tggxm\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.018801 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" path="/var/lib/kubelet/pods/2a16c147-1622-4470-a54d-e331fae4ea8a/volumes" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.066702 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerStarted","Data":"6e72a243845616596bc9cb39d3b2b045375f39a24faa609e3d1a0cfffd3969c6"} Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.068023 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.070590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" event={"ID":"4074fbb2-9d24-491f-9053-54171f6e4dbd","Type":"ContainerDied","Data":"429f47af199b91b8f7d497f46c805a42694e5b1adda6e417ce9aef5f07b94828"} Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.070619 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429f47af199b91b8f7d497f46c805a42694e5b1adda6e417ce9aef5f07b94828" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.070673 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561254-fl8vh" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.105226 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.233536932 podStartE2EDuration="7.105200549s" podCreationTimestamp="2026-03-16 15:34:14 +0000 UTC" firstStartedPulling="2026-03-16 15:34:15.527253196 +0000 UTC m=+1257.254643483" lastFinishedPulling="2026-03-16 15:34:20.398916813 +0000 UTC m=+1262.126307100" observedRunningTime="2026-03-16 15:34:21.089476099 +0000 UTC m=+1262.816866406" watchObservedRunningTime="2026-03-16 15:34:21.105200549 +0000 UTC m=+1262.832590836" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.194280 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561248-5nwht"] Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.200676 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561248-5nwht"] Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.606297 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:34:21 crc kubenswrapper[4736]: I0316 15:34:21.818187 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:34:22 crc 
kubenswrapper[4736]: I0316 15:34:22.989424 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c3b2e2-c7c8-4879-8b27-1f379491c363" path="/var/lib/kubelet/pods/d3c3b2e2-c7c8-4879-8b27-1f379491c363/volumes" Mar 16 15:34:25 crc kubenswrapper[4736]: I0316 15:34:25.472562 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:34:25 crc kubenswrapper[4736]: E0316 15:34:25.472762 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:34:25 crc kubenswrapper[4736]: E0316 15:34:25.473154 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:34:25 crc kubenswrapper[4736]: E0316 15:34:25.473222 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:34:57.473201521 +0000 UTC m=+1299.200591808 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.605084 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.605832 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.606710 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15"} pod="openstack/horizon-67c978df54-kdnqn" containerMessage="Container horizon failed startup probe, will be restarted" Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.606745 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" containerID="cri-o://bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.814562 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.815091 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:34:31 crc 
kubenswrapper[4736]: I0316 15:34:31.816370 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"92eb5124b55e332240f3b250db39e95238d56dfb7b44e8b175f4bd70f4318710"} pod="openstack/horizon-ff55bcd5b-psrsc" containerMessage="Container horizon failed startup probe, will be restarted" Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.816421 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" containerID="cri-o://92eb5124b55e332240f3b250db39e95238d56dfb7b44e8b175f4bd70f4318710" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.903731 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.904048 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-central-agent" containerID="cri-o://845717db5b63f7978fc2f2d6435480fd88ed59aeb4bcaa960099bb32253a058c" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.904140 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="proxy-httpd" containerID="cri-o://6e72a243845616596bc9cb39d3b2b045375f39a24faa609e3d1a0cfffd3969c6" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.904193 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="sg-core" containerID="cri-o://f3535adb29f1bec8d7057c20c9ba896944030e9edd10e1049441e98eaa683b92" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.904228 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-notification-agent" containerID="cri-o://d84664e5c9d1dd16267705f20a70689ca0da2e21aed27f64fcb8dbeb4a981420" gracePeriod=30 Mar 16 15:34:31 crc kubenswrapper[4736]: I0316 15:34:31.915949 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": EOF" Mar 16 15:34:32 crc kubenswrapper[4736]: I0316 15:34:32.234198 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerID="f3535adb29f1bec8d7057c20c9ba896944030e9edd10e1049441e98eaa683b92" exitCode=2 Mar 16 15:34:32 crc kubenswrapper[4736]: I0316 15:34:32.234252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerDied","Data":"f3535adb29f1bec8d7057c20c9ba896944030e9edd10e1049441e98eaa683b92"} Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.278865 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerID="6e72a243845616596bc9cb39d3b2b045375f39a24faa609e3d1a0cfffd3969c6" exitCode=0 Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.279283 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4302c72-bb35-405a-ad35-509b9e845dcc" 
containerID="d84664e5c9d1dd16267705f20a70689ca0da2e21aed27f64fcb8dbeb4a981420" exitCode=0 Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.279297 4736 generic.go:334] "Generic (PLEG): container finished" podID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerID="845717db5b63f7978fc2f2d6435480fd88ed59aeb4bcaa960099bb32253a058c" exitCode=0 Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.279341 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerDied","Data":"6e72a243845616596bc9cb39d3b2b045375f39a24faa609e3d1a0cfffd3969c6"} Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.279372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerDied","Data":"d84664e5c9d1dd16267705f20a70689ca0da2e21aed27f64fcb8dbeb4a981420"} Mar 16 15:34:33 crc kubenswrapper[4736]: I0316 15:34:33.279383 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerDied","Data":"845717db5b63f7978fc2f2d6435480fd88ed59aeb4bcaa960099bb32253a058c"} Mar 16 15:34:33 crc kubenswrapper[4736]: E0316 15:34:33.451138 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:34:33 crc kubenswrapper[4736]: E0316 15:34:33.451199 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9" Mar 16 15:34:33 crc kubenswrapper[4736]: E0316 15:34:33.451323 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn9j5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-zhxnt_openstack(c548edb2-4f30-4790-bf13-c2509601cd25): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 15:34:33 crc kubenswrapper[4736]: E0316 15:34:33.456874 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.195306 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302397 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpdq9\" (UniqueName: \"kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302506 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302539 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302629 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302666 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.302790 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.303026 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.303263 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.303579 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd\") pod \"d4302c72-bb35-405a-ad35-509b9e845dcc\" (UID: \"d4302c72-bb35-405a-ad35-509b9e845dcc\") " Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.304019 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.304032 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4302c72-bb35-405a-ad35-509b9e845dcc-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.326364 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts" (OuterVolumeSpecName: "scripts") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.357122 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9" (OuterVolumeSpecName: "kube-api-access-vpdq9") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "kube-api-access-vpdq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.397186 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.397179 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4302c72-bb35-405a-ad35-509b9e845dcc","Type":"ContainerDied","Data":"01ebbf430160b320cd4edc72b1b620f082bf146ef6d1ddf45649aa144f687b4c"} Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.399682 4736 scope.go:117] "RemoveContainer" containerID="6e72a243845616596bc9cb39d3b2b045375f39a24faa609e3d1a0cfffd3969c6" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.400471 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-nova-conductor:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.407071 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpdq9\" (UniqueName: \"kubernetes.io/projected/d4302c72-bb35-405a-ad35-509b9e845dcc-kube-api-access-vpdq9\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.407096 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.411204 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.484804 4736 scope.go:117] "RemoveContainer" containerID="f3535adb29f1bec8d7057c20c9ba896944030e9edd10e1049441e98eaa683b92" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.506209 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.509062 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.509209 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.531306 4736 scope.go:117] "RemoveContainer" containerID="d84664e5c9d1dd16267705f20a70689ca0da2e21aed27f64fcb8dbeb4a981420" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.556807 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data" (OuterVolumeSpecName: "config-data") pod "d4302c72-bb35-405a-ad35-509b9e845dcc" (UID: "d4302c72-bb35-405a-ad35-509b9e845dcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.580393 4736 scope.go:117] "RemoveContainer" containerID="845717db5b63f7978fc2f2d6435480fd88ed59aeb4bcaa960099bb32253a058c" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.610927 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4302c72-bb35-405a-ad35-509b9e845dcc-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.750962 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.757429 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843337 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843776 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-central-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843794 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-central-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843809 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="sg-core" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843817 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="sg-core" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843827 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="proxy-httpd" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843833 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="proxy-httpd" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843850 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerName="heat-engine" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843856 4736 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerName="heat-engine" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843872 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-notification-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843879 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-notification-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: E0316 15:34:34.843894 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4074fbb2-9d24-491f-9053-54171f6e4dbd" containerName="oc" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.843901 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4074fbb2-9d24-491f-9053-54171f6e4dbd" containerName="oc" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844069 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="proxy-httpd" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844087 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4074fbb2-9d24-491f-9053-54171f6e4dbd" containerName="oc" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844153 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="sg-core" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844165 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-central-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844179 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a16c147-1622-4470-a54d-e331fae4ea8a" containerName="heat-engine" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.844187 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" containerName="ceilometer-notification-agent" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.846310 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.849489 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.849659 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.879099 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917030 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4d7\" (UniqueName: \"kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917279 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917305 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917347 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917382 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.917508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:34 crc kubenswrapper[4736]: I0316 15:34:34.989496 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4302c72-bb35-405a-ad35-509b9e845dcc" path="/var/lib/kubelet/pods/d4302c72-bb35-405a-ad35-509b9e845dcc/volumes" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.019800 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.019887 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4d7\" (UniqueName: \"kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020023 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020272 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020311 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020358 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020949 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.020980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.024204 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.025225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.028696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.030968 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.040739 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4d7\" (UniqueName: \"kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7\") pod \"ceilometer-0\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.170891 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.407837 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"421bab10-ac4a-458f-98e3-18cd0adef038","Type":"ContainerStarted","Data":"a0bfeaac6716ef874bd67d212c995564b2723eaa721029f94b4b61d0be5d704d"} Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.429137 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.984661193 podStartE2EDuration="1m5.429086402s" podCreationTimestamp="2026-03-16 15:33:30 +0000 UTC" firstStartedPulling="2026-03-16 15:33:34.638853962 +0000 UTC m=+1216.366244249" lastFinishedPulling="2026-03-16 15:34:34.083279171 +0000 UTC m=+1275.810669458" observedRunningTime="2026-03-16 15:34:35.425531218 +0000 UTC m=+1277.152921495" watchObservedRunningTime="2026-03-16 15:34:35.429086402 +0000 UTC m=+1277.156476689" Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.677874 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.678203 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-log" containerID="cri-o://ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201" gracePeriod=30 Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.678375 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-httpd" containerID="cri-o://d75869f2cacbe41fdec4476257774b71fa31c01d8ddaf5f9021696548485b669" gracePeriod=30 Mar 16 15:34:35 crc kubenswrapper[4736]: I0316 15:34:35.688055 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:36 crc kubenswrapper[4736]: E0316 15:34:36.001707 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d211807_0eb0_4c50_89c4_f85fe552dcb6.slice/crio-conmon-ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d211807_0eb0_4c50_89c4_f85fe552dcb6.slice/crio-ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201.scope\": RecentStats: unable to find data in memory cache]" Mar 16 15:34:36 crc kubenswrapper[4736]: I0316 15:34:36.423171 4736 generic.go:334] "Generic (PLEG): container finished" podID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerID="ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201" exitCode=143 Mar 16 15:34:36 crc kubenswrapper[4736]: I0316 15:34:36.423723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerDied","Data":"ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201"} Mar 16 15:34:36 crc kubenswrapper[4736]: I0316 15:34:36.425722 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerStarted","Data":"59dac5c4ff2034eb73c5d7139bad942eab183ad8ea6904dbe578ebca1f2ac538"} Mar 16 15:34:36 crc kubenswrapper[4736]: I0316 15:34:36.425781 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerStarted","Data":"74eaf53a2c02c05155125fea059dd194ff72952c3cdcf630b375fe0b0e62798c"} Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.242979 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.248473 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-log" containerID="cri-o://ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497" gracePeriod=30 Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.248588 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-httpd" containerID="cri-o://78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35" gracePeriod=30 Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.437883 4736 generic.go:334] "Generic (PLEG): container finished" podID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerID="ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497" exitCode=143 Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.437967 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerDied","Data":"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497"} Mar 16 15:34:37 crc kubenswrapper[4736]: I0316 15:34:37.446691 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerStarted","Data":"246f7a55a2baeb22ee7fca3e7461ad8352298b6864a6822e5f1ed71cdb66d087"} Mar 16 15:34:38 crc kubenswrapper[4736]: I0316 15:34:38.459904 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerStarted","Data":"136a3c08194fb89d7a70eb8fecff6e9b03f508f11cb32066a61246f505136ab9"} Mar 16 15:34:39 crc kubenswrapper[4736]: I0316 15:34:39.518803 4736 generic.go:334] "Generic (PLEG): container finished" podID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerID="d75869f2cacbe41fdec4476257774b71fa31c01d8ddaf5f9021696548485b669" exitCode=0 Mar 16 15:34:39 crc kubenswrapper[4736]: I0316 15:34:39.520305 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerDied","Data":"d75869f2cacbe41fdec4476257774b71fa31c01d8ddaf5f9021696548485b669"} Mar 16 15:34:39 crc kubenswrapper[4736]: I0316 15:34:39.963298 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.070791 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.070935 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.070982 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.071067 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.071130 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.071220 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.071293 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxxw\" (UniqueName: \"kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.071357 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data\") pod \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\" (UID: \"2d211807-0eb0-4c50-89c4-f85fe552dcb6\") " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.077088 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.078984 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs" (OuterVolumeSpecName: "logs") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.087436 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.091944 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts" (OuterVolumeSpecName: "scripts") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.097701 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw" (OuterVolumeSpecName: "kube-api-access-cwxxw") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "kube-api-access-cwxxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.129238 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.172311 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175418 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175445 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175456 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175484 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175495 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d211807-0eb0-4c50-89c4-f85fe552dcb6-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175503 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxxw\" (UniqueName: \"kubernetes.io/projected/2d211807-0eb0-4c50-89c4-f85fe552dcb6-kube-api-access-cwxxw\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.175514 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.192417 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data" (OuterVolumeSpecName: "config-data") pod "2d211807-0eb0-4c50-89c4-f85fe552dcb6" (UID: "2d211807-0eb0-4c50-89c4-f85fe552dcb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.203285 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.276850 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d211807-0eb0-4c50-89c4-f85fe552dcb6-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.276887 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.535675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerStarted","Data":"af1d6d2eb216fcb7467a2a133eaef09cf21510af8c407ea82e191f92b3c765ef"} Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.536214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.539150 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2d211807-0eb0-4c50-89c4-f85fe552dcb6","Type":"ContainerDied","Data":"7f42093c8775ea00d361a897e649a0f86f4ab8909282e6398b0ad1668b0c650e"} Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.539199 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.539206 4736 scope.go:117] "RemoveContainer" containerID="d75869f2cacbe41fdec4476257774b71fa31c01d8ddaf5f9021696548485b669" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.618320 4736 scope.go:117] "RemoveContainer" containerID="ae8365bdcff13d95406a141eee254832ff2825587f9974cdfda46ed169371201" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.622567 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5220914089999997 podStartE2EDuration="6.622541055s" podCreationTimestamp="2026-03-16 15:34:34 +0000 UTC" firstStartedPulling="2026-03-16 15:34:35.71240916 +0000 UTC m=+1277.439799447" lastFinishedPulling="2026-03-16 15:34:39.812858806 +0000 UTC m=+1281.540249093" observedRunningTime="2026-03-16 15:34:40.579562146 +0000 UTC m=+1282.306952423" watchObservedRunningTime="2026-03-16 15:34:40.622541055 +0000 UTC m=+1282.349931342" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.639521 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.652992 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.674416 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:40 crc kubenswrapper[4736]: E0316 15:34:40.675163 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-httpd" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.675182 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-httpd" Mar 16 15:34:40 crc kubenswrapper[4736]: E0316 15:34:40.675214 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-log" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.675223 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-log" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.675401 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-log" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.675426 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" containerName="glance-httpd" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.678393 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.683783 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.684059 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.704186 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804020 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804129 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804154 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804235 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804259 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " 
pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804284 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-logs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804303 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.804322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk72t\" (UniqueName: \"kubernetes.io/projected/5c913371-a3e8-4e40-a1e3-69f93eeef930-kube-api-access-zk72t\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.906022 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.908729 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909134 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909184 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909266 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-logs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 
16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909351 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk72t\" (UniqueName: \"kubernetes.io/projected/5c913371-a3e8-4e40-a1e3-69f93eeef930-kube-api-access-zk72t\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909691 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.909827 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.910094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c913371-a3e8-4e40-a1e3-69f93eeef930-logs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.918556 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.920479 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.927846 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.963066 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk72t\" (UniqueName: \"kubernetes.io/projected/5c913371-a3e8-4e40-a1e3-69f93eeef930-kube-api-access-zk72t\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.965863 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c913371-a3e8-4e40-a1e3-69f93eeef930-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:40 crc kubenswrapper[4736]: I0316 15:34:40.989629 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5c913371-a3e8-4e40-a1e3-69f93eeef930\") " pod="openstack/glance-default-external-api-0" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.005699 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.026001 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d211807-0eb0-4c50-89c4-f85fe552dcb6" path="/var/lib/kubelet/pods/2d211807-0eb0-4c50-89c4-f85fe552dcb6/volumes" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.219241 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319385 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319508 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319674 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319702 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319821 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkgsl\" (UniqueName: \"kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.319855 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.320083 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.320184 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\" (UID: \"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54\") " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.331456 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.331535 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs" (OuterVolumeSpecName: "logs") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.347012 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts" (OuterVolumeSpecName: "scripts") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.347928 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.348350 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl" (OuterVolumeSpecName: "kube-api-access-qkgsl") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "kube-api-access-qkgsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.445558 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.445592 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.445602 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkgsl\" (UniqueName: \"kubernetes.io/projected/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-kube-api-access-qkgsl\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.445613 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.445625 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.500286 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.518823 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.547362 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.554507 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.558638 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data" (OuterVolumeSpecName: "config-data") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.581814 4736 generic.go:334] "Generic (PLEG): container finished" podID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerID="78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35" exitCode=0 Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.581864 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerDied","Data":"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35"} Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.581894 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54","Type":"ContainerDied","Data":"ccbdf00dbd9ba3115decc20d915847cfd61e4965b0c2c7ed291de5d92b94a643"} Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.581913 4736 scope.go:117] "RemoveContainer" containerID="78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.582034 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.598831 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" (UID: "1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.632013 4736 scope.go:117] "RemoveContainer" containerID="ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.656991 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.657029 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.688079 4736 scope.go:117] "RemoveContainer" containerID="78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35" Mar 16 15:34:41 crc kubenswrapper[4736]: E0316 15:34:41.695688 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35\": container with ID starting with 78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35 not found: ID does not exist" containerID="78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.695760 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35"} err="failed to get container status \"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35\": rpc error: code = NotFound desc = could not find container 
\"78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35\": container with ID starting with 78d1a406a3835d0ca6e8308046fa9b239065cf3315d06d62522ce6ca50a6bd35 not found: ID does not exist" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.695795 4736 scope.go:117] "RemoveContainer" containerID="ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497" Mar 16 15:34:41 crc kubenswrapper[4736]: E0316 15:34:41.700973 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497\": container with ID starting with ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497 not found: ID does not exist" containerID="ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.701021 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497"} err="failed to get container status \"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497\": rpc error: code = NotFound desc = could not find container \"ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497\": container with ID starting with ad41162d082708d1bd8198588e0c026c653422c0e1613625a57b60a61c492497 not found: ID does not exist" Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.969725 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:41 crc kubenswrapper[4736]: I0316 15:34:41.991224 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.024194 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:42 crc kubenswrapper[4736]: E0316 15:34:42.024715 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-httpd" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.024738 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-httpd" Mar 16 15:34:42 crc kubenswrapper[4736]: E0316 15:34:42.024751 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-log" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.024758 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-log" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.024954 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-log" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.024973 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" containerName="glance-httpd" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.026024 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.038541 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.038856 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.039068 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170425 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170544 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170607 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170635 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170691 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170719 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170769 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcj9r\" (UniqueName: \"kubernetes.io/projected/ce1ada3f-941d-4468-8a04-0c780a84148b-kube-api-access-mcj9r\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.170800 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.176938 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 16 15:34:42 crc kubenswrapper[4736]: W0316 15:34:42.178573 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c913371_a3e8_4e40_a1e3_69f93eeef930.slice/crio-2d4102d05f1d636ae9f2ae6b73e5937b7898676a2391115f9797f8517a935d7b WatchSource:0}: Error finding container 2d4102d05f1d636ae9f2ae6b73e5937b7898676a2391115f9797f8517a935d7b: Status 404 returned error can't find the container with id 2d4102d05f1d636ae9f2ae6b73e5937b7898676a2391115f9797f8517a935d7b Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272426 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272514 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272545 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcj9r\" (UniqueName: \"kubernetes.io/projected/ce1ada3f-941d-4468-8a04-0c780a84148b-kube-api-access-mcj9r\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272624 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272683 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272730 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.272771 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.273921 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.274271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.274544 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce1ada3f-941d-4468-8a04-0c780a84148b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.279824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.282861 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.282938 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.296028 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce1ada3f-941d-4468-8a04-0c780a84148b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.315241 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcj9r\" (UniqueName: \"kubernetes.io/projected/ce1ada3f-941d-4468-8a04-0c780a84148b-kube-api-access-mcj9r\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " 
pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.328587 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce1ada3f-941d-4468-8a04-0c780a84148b\") " pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.356535 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.617750 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c913371-a3e8-4e40-a1e3-69f93eeef930","Type":"ContainerStarted","Data":"2d4102d05f1d636ae9f2ae6b73e5937b7898676a2391115f9797f8517a935d7b"} Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.880867 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.884951 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-central-agent" containerID="cri-o://59dac5c4ff2034eb73c5d7139bad942eab183ad8ea6904dbe578ebca1f2ac538" gracePeriod=30 Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.885738 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="proxy-httpd" containerID="cri-o://af1d6d2eb216fcb7467a2a133eaef09cf21510af8c407ea82e191f92b3c765ef" gracePeriod=30 Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.885921 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="sg-core" containerID="cri-o://136a3c08194fb89d7a70eb8fecff6e9b03f508f11cb32066a61246f505136ab9" gracePeriod=30 Mar 16 15:34:42 crc kubenswrapper[4736]: I0316 15:34:42.886025 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-notification-agent" containerID="cri-o://246f7a55a2baeb22ee7fca3e7461ad8352298b6864a6822e5f1ed71cdb66d087" gracePeriod=30 Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.023089 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54" path="/var/lib/kubelet/pods/1c6fa22c-e2c3-4ccb-9b2d-84ee1a1e0c54/volumes" Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.076879 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.648275 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c913371-a3e8-4e40-a1e3-69f93eeef930","Type":"ContainerStarted","Data":"1f0dca12e4739e006e30e1a2c3cca3012f3e172d117e5a071e22f2b603fe952b"} Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.651950 4736 generic.go:334] "Generic (PLEG): container finished" podID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerID="af1d6d2eb216fcb7467a2a133eaef09cf21510af8c407ea82e191f92b3c765ef" exitCode=0 Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.651971 4736 generic.go:334] "Generic (PLEG): container 
finished" podID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerID="136a3c08194fb89d7a70eb8fecff6e9b03f508f11cb32066a61246f505136ab9" exitCode=2 Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.651979 4736 generic.go:334] "Generic (PLEG): container finished" podID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerID="246f7a55a2baeb22ee7fca3e7461ad8352298b6864a6822e5f1ed71cdb66d087" exitCode=0 Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.652008 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerDied","Data":"af1d6d2eb216fcb7467a2a133eaef09cf21510af8c407ea82e191f92b3c765ef"} Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.652024 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerDied","Data":"136a3c08194fb89d7a70eb8fecff6e9b03f508f11cb32066a61246f505136ab9"} Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.652046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerDied","Data":"246f7a55a2baeb22ee7fca3e7461ad8352298b6864a6822e5f1ed71cdb66d087"} Mar 16 15:34:43 crc kubenswrapper[4736]: I0316 15:34:43.655420 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce1ada3f-941d-4468-8a04-0c780a84148b","Type":"ContainerStarted","Data":"c37428e705e6d3f3aa187bf7c92ed0d7da05c781a5466227517a1ae257d32a0c"} Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.667482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce1ada3f-941d-4468-8a04-0c780a84148b","Type":"ContainerStarted","Data":"5b947632bb0d62b1738241543d945cea9afc1333f41f106cd22872ee8cb909a9"} Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.668227 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce1ada3f-941d-4468-8a04-0c780a84148b","Type":"ContainerStarted","Data":"7473b3273cf2ac4bc3685f19b7fe8235182ea5c8829ef80ddb5b526d803b7d42"} Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.670545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5c913371-a3e8-4e40-a1e3-69f93eeef930","Type":"ContainerStarted","Data":"c07ca4e356e07dee857b55149393f1d70a7e2efabb8d7d82127bd21cd501b0dd"} Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.689944 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.6899279270000003 podStartE2EDuration="3.689927927s" podCreationTimestamp="2026-03-16 15:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:34:44.688750835 +0000 UTC m=+1286.416141122" watchObservedRunningTime="2026-03-16 15:34:44.689927927 +0000 UTC m=+1286.417318214" Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.794350 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.794317655 podStartE2EDuration="4.794317655s" podCreationTimestamp="2026-03-16 15:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-16 15:34:44.758866938 +0000 UTC m=+1286.486257225" watchObservedRunningTime="2026-03-16 15:34:44.794317655 +0000 UTC m=+1286.521707942" Mar 16 15:34:44 crc kubenswrapper[4736]: I0316 15:34:44.866943 4736 scope.go:117] "RemoveContainer" containerID="68b3e22b6bcecfec9f9712dbe7b564ffabed1fcd1e9a892dd27040f7a30c5290" Mar 16 15:34:47 crc kubenswrapper[4736]: I0316 15:34:47.696669 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" event={"ID":"c548edb2-4f30-4790-bf13-c2509601cd25","Type":"ContainerStarted","Data":"2d5ccf73209154aeb5ca1a33715bc3834de2d61dc950286f511a9448e10724e9"} Mar 16 15:34:47 crc kubenswrapper[4736]: I0316 15:34:47.727834 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" podStartSLOduration=16.343855094 podStartE2EDuration="49.727808617s" podCreationTimestamp="2026-03-16 15:33:58 +0000 UTC" firstStartedPulling="2026-03-16 15:34:13.789820924 +0000 UTC m=+1255.517211211" lastFinishedPulling="2026-03-16 15:34:47.173774457 +0000 UTC m=+1288.901164734" observedRunningTime="2026-03-16 15:34:47.71632775 +0000 UTC m=+1289.443718037" watchObservedRunningTime="2026-03-16 15:34:47.727808617 +0000 UTC m=+1289.455198904" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.006908 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.007284 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.047665 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.055364 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.742202 4736 generic.go:334] "Generic (PLEG): container finished" podID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerID="59dac5c4ff2034eb73c5d7139bad942eab183ad8ea6904dbe578ebca1f2ac538" exitCode=0 Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.744903 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerDied","Data":"59dac5c4ff2034eb73c5d7139bad942eab183ad8ea6904dbe578ebca1f2ac538"} Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.744949 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 16 15:34:51 crc kubenswrapper[4736]: I0316 15:34:51.745156 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.165386 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.321602 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.321728 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.321813 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.321975 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.322011 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.322039 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.322119 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4d7\" (UniqueName: \"kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7\") pod \"568db0f4-c4f9-498e-9a40-36bbd88936b8\" (UID: \"568db0f4-c4f9-498e-9a40-36bbd88936b8\") " Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.324425 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.324812 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.334326 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7" (OuterVolumeSpecName: "kube-api-access-gg4d7") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "kube-api-access-gg4d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.334806 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts" (OuterVolumeSpecName: "scripts") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.357470 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.357517 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.424321 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425634 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425664 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425674 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/568db0f4-c4f9-498e-9a40-36bbd88936b8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425686 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425696 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4d7\" (UniqueName: \"kubernetes.io/projected/568db0f4-c4f9-498e-9a40-36bbd88936b8-kube-api-access-gg4d7\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.425744 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.455293 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.518062 4736 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data" (OuterVolumeSpecName: "config-data") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.531276 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.540265 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "568db0f4-c4f9-498e-9a40-36bbd88936b8" (UID: "568db0f4-c4f9-498e-9a40-36bbd88936b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.633520 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568db0f4-c4f9-498e-9a40-36bbd88936b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.759031 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"568db0f4-c4f9-498e-9a40-36bbd88936b8","Type":"ContainerDied","Data":"74eaf53a2c02c05155125fea059dd194ff72952c3cdcf630b375fe0b0e62798c"} Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.759630 4736 scope.go:117] "RemoveContainer" containerID="af1d6d2eb216fcb7467a2a133eaef09cf21510af8c407ea82e191f92b3c765ef" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.759095 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.762099 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.762410 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.820274 4736 scope.go:117] "RemoveContainer" containerID="136a3c08194fb89d7a70eb8fecff6e9b03f508f11cb32066a61246f505136ab9" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.827808 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.843618 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.896166 4736 scope.go:117] "RemoveContainer" containerID="246f7a55a2baeb22ee7fca3e7461ad8352298b6864a6822e5f1ed71cdb66d087" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.912484 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:52 crc kubenswrapper[4736]: E0316 15:34:52.912950 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-notification-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.912971 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-notification-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: E0316 15:34:52.912983 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="proxy-httpd" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.912989 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="proxy-httpd" Mar 16 15:34:52 crc kubenswrapper[4736]: E0316 15:34:52.913016 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-central-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913023 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-central-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: E0316 15:34:52.913035 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="sg-core" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913040 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="sg-core" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913222 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="sg-core" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913247 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-central-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913255 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="proxy-httpd" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.913262 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" containerName="ceilometer-notification-agent" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.914969 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.922768 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.922980 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944524 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944654 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944700 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944720 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944775 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftttg\" (UniqueName: \"kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.944813 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:52 crc kubenswrapper[4736]: I0316 15:34:52.947351 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.031574 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568db0f4-c4f9-498e-9a40-36bbd88936b8" 
path="/var/lib/kubelet/pods/568db0f4-c4f9-498e-9a40-36bbd88936b8/volumes" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.037530 4736 scope.go:117] "RemoveContainer" containerID="59dac5c4ff2034eb73c5d7139bad942eab183ad8ea6904dbe578ebca1f2ac538" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046421 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046495 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046607 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046684 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.046766 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftttg\" (UniqueName: \"kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.050518 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.051014 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.057100 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.072561 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.073290 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.073846 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.080900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftttg\" (UniqueName: \"kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg\") pod \"ceilometer-0\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.278393 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.771052 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.771403 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:53 crc kubenswrapper[4736]: I0316 15:34:53.843208 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:34:54 crc kubenswrapper[4736]: I0316 15:34:54.803488 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:54 crc kubenswrapper[4736]: I0316 15:34:54.804176 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:54 crc kubenswrapper[4736]: I0316 15:34:54.803937 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerStarted","Data":"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a"} Mar 16 15:34:54 crc kubenswrapper[4736]: I0316 15:34:54.804400 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerStarted","Data":"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e"} Mar 16 15:34:54 crc kubenswrapper[4736]: I0316 15:34:54.804415 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerStarted","Data":"d816dce6e9901142cf87ac674d78f401dc945bd5bdb21f497c95a9ee1a39afc2"} Mar 16 15:34:55 crc kubenswrapper[4736]: I0316 15:34:55.812133 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerStarted","Data":"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295"} Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.431553 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.431682 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.506675 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.506802 4736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.512367 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 16 15:34:56 crc kubenswrapper[4736]: I0316 15:34:56.598513 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 16 15:34:57 crc kubenswrapper[4736]: I0316 15:34:57.558381 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:34:57 crc kubenswrapper[4736]: E0316 15:34:57.558582 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:34:57 crc kubenswrapper[4736]: E0316 15:34:57.558618 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:34:57 crc kubenswrapper[4736]: E0316 15:34:57.558693 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:36:01.558669108 +0000 UTC m=+1363.286059395 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:34:58 crc kubenswrapper[4736]: I0316 15:34:58.851384 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerStarted","Data":"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f"} Mar 16 15:34:58 crc kubenswrapper[4736]: I0316 15:34:58.854143 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:34:58 crc kubenswrapper[4736]: I0316 15:34:58.900600 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.994094241 podStartE2EDuration="6.900571364s" podCreationTimestamp="2026-03-16 15:34:52 +0000 UTC" firstStartedPulling="2026-03-16 15:34:53.869765537 +0000 UTC m=+1295.597155824" lastFinishedPulling="2026-03-16 15:34:57.77624266 +0000 UTC m=+1299.503632947" observedRunningTime="2026-03-16 15:34:58.892929481 +0000 UTC m=+1300.620319768" watchObservedRunningTime="2026-03-16 15:34:58.900571364 +0000 UTC m=+1300.627961651" Mar 16 15:35:01 crc kubenswrapper[4736]: I0316 15:35:01.888418 4736 generic.go:334] "Generic (PLEG): container finished" podID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerID="bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15" exitCode=137 Mar 16 15:35:01 crc kubenswrapper[4736]: I0316 15:35:01.888666 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerDied","Data":"bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15"} Mar 16 15:35:01 crc kubenswrapper[4736]: I0316 15:35:01.889156 4736 scope.go:117] "RemoveContainer" containerID="155657f8a696f56dc49ec2f184c01c7c2d808c7a65337e7a1c2bfaaa3b82d5cf" Mar 16 15:35:02 crc kubenswrapper[4736]: I0316 15:35:02.902724 4736 generic.go:334] "Generic (PLEG): container finished" podID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerID="92eb5124b55e332240f3b250db39e95238d56dfb7b44e8b175f4bd70f4318710" exitCode=137 Mar 16 15:35:02 crc kubenswrapper[4736]: I0316 15:35:02.902781 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerDied","Data":"92eb5124b55e332240f3b250db39e95238d56dfb7b44e8b175f4bd70f4318710"} Mar 16 15:35:02 crc kubenswrapper[4736]: I0316 15:35:02.904634 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-ff55bcd5b-psrsc" event={"ID":"4a2c18b8-790c-4bb8-ac86-c70f0220ab3f","Type":"ContainerStarted","Data":"a0a121df858b2b908a13af2a93654f8b52d2a613696467ad9e509770e16550ee"} Mar 16 15:35:02 crc kubenswrapper[4736]: I0316 15:35:02.904714 4736 scope.go:117] "RemoveContainer" containerID="ae03ef4d42b1153ca6b496c7c17ea26b167a6eef22d2794e60fdb93441e529e8" Mar 16 15:35:02 crc kubenswrapper[4736]: I0316 15:35:02.910520 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerStarted","Data":"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2"} Mar 16 15:35:04 crc kubenswrapper[4736]: E0316 15:35:04.620371 4736 pod_workers.go:1301] "Error syncing pod, 
skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="0892ebc9-dbd4-4652-9691-13028da07f80" Mar 16 15:35:04 crc kubenswrapper[4736]: I0316 15:35:04.935744 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:35:08 crc kubenswrapper[4736]: I0316 15:35:08.195225 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:35:08 crc kubenswrapper[4736]: E0316 15:35:08.195527 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:35:08 crc kubenswrapper[4736]: E0316 15:35:08.197532 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:35:08 crc kubenswrapper[4736]: E0316 15:35:08.197714 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:37:10.197688027 +0000 UTC m=+1431.925078514 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:35:08 crc kubenswrapper[4736]: I0316 15:35:08.991911 4736 generic.go:334] "Generic (PLEG): container finished" podID="c548edb2-4f30-4790-bf13-c2509601cd25" containerID="2d5ccf73209154aeb5ca1a33715bc3834de2d61dc950286f511a9448e10724e9" exitCode=0 Mar 16 15:35:08 crc kubenswrapper[4736]: I0316 15:35:08.992004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" event={"ID":"c548edb2-4f30-4790-bf13-c2509601cd25","Type":"ContainerDied","Data":"2d5ccf73209154aeb5ca1a33715bc3834de2d61dc950286f511a9448e10724e9"} Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.398763 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.440716 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9j5\" (UniqueName: \"kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5\") pod \"c548edb2-4f30-4790-bf13-c2509601cd25\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.440756 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts\") pod \"c548edb2-4f30-4790-bf13-c2509601cd25\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.440854 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle\") pod \"c548edb2-4f30-4790-bf13-c2509601cd25\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.440900 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data\") pod \"c548edb2-4f30-4790-bf13-c2509601cd25\" (UID: \"c548edb2-4f30-4790-bf13-c2509601cd25\") " Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.471301 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5" (OuterVolumeSpecName: "kube-api-access-qn9j5") pod "c548edb2-4f30-4790-bf13-c2509601cd25" (UID: "c548edb2-4f30-4790-bf13-c2509601cd25"). InnerVolumeSpecName "kube-api-access-qn9j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.485378 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts" (OuterVolumeSpecName: "scripts") pod "c548edb2-4f30-4790-bf13-c2509601cd25" (UID: "c548edb2-4f30-4790-bf13-c2509601cd25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.502578 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data" (OuterVolumeSpecName: "config-data") pod "c548edb2-4f30-4790-bf13-c2509601cd25" (UID: "c548edb2-4f30-4790-bf13-c2509601cd25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.504842 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c548edb2-4f30-4790-bf13-c2509601cd25" (UID: "c548edb2-4f30-4790-bf13-c2509601cd25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.543202 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.543232 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9j5\" (UniqueName: \"kubernetes.io/projected/c548edb2-4f30-4790-bf13-c2509601cd25-kube-api-access-qn9j5\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.543242 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:10 crc kubenswrapper[4736]: I0316 15:35:10.543252 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c548edb2-4f30-4790-bf13-c2509601cd25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.017334 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" event={"ID":"c548edb2-4f30-4790-bf13-c2509601cd25","Type":"ContainerDied","Data":"21afe0fbca9b68412e19fdd037992c13cf9f091fda057b2ec010156a34cf4182"} Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.017388 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21afe0fbca9b68412e19fdd037992c13cf9f091fda057b2ec010156a34cf4182" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.017532 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zhxnt" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.156517 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:11 crc kubenswrapper[4736]: E0316 15:35:11.157331 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" containerName="nova-cell0-conductor-db-sync" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.157425 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" containerName="nova-cell0-conductor-db-sync" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.157766 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" containerName="nova-cell0-conductor-db-sync" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.158847 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.162606 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zld2d" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.162821 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.185763 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.256673 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.257095 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.257213 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfsj\" (UniqueName: \"kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.358884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfsj\" (UniqueName: \"kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.358997 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.359130 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.363336 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.363757 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.385762 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfsj\" (UniqueName: \"kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj\") pod \"nova-cell0-conductor-0\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.482178 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.604154 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.604196 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.814895 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:35:11 crc kubenswrapper[4736]: I0316 15:35:11.816027 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:35:12 crc kubenswrapper[4736]: I0316 15:35:12.131480 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:13 crc kubenswrapper[4736]: I0316 15:35:13.059766 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1586d073-7299-41ec-9875-fc52e8cdc45d","Type":"ContainerStarted","Data":"8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced"} Mar 16 15:35:13 crc kubenswrapper[4736]: I0316 15:35:13.060240 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1586d073-7299-41ec-9875-fc52e8cdc45d","Type":"ContainerStarted","Data":"e9d9af43f9362ff2558509e0e4a958fcfe934e4d9f31461e0a59ae8e0db1a409"} Mar 16 15:35:13 crc kubenswrapper[4736]: I0316 15:35:13.060266 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:13 crc kubenswrapper[4736]: I0316 15:35:13.100265 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.100250059 podStartE2EDuration="2.100250059s" podCreationTimestamp="2026-03-16 15:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:13.09990146 +0000 UTC m=+1314.827291747" watchObservedRunningTime="2026-03-16 15:35:13.100250059 +0000 UTC m=+1314.827640346" Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.725729 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.726590 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="sg-core" containerID="cri-o://aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" gracePeriod=30 Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.726609 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="proxy-httpd" containerID="cri-o://35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" gracePeriod=30 Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.726769 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-notification-agent" containerID="cri-o://ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" gracePeriod=30 Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.726867 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-central-agent" containerID="cri-o://b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" gracePeriod=30 Mar 16 15:35:15 crc kubenswrapper[4736]: I0316 15:35:15.742419 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.203:3000/\": EOF" Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.090342 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerID="35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" exitCode=0 Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.090677 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerID="aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" exitCode=2 Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.090411 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerDied","Data":"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f"} Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.090712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerDied","Data":"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295"} Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.825368 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968353 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968465 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968721 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968916 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.968978 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftttg\" (UniqueName: \"kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.969022 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle\") pod \"d2af13e8-a004-473f-8e5c-615fb566d1a9\" (UID: \"d2af13e8-a004-473f-8e5c-615fb566d1a9\") " Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.991388 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg" (OuterVolumeSpecName: "kube-api-access-ftttg") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "kube-api-access-ftttg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.993122 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts" (OuterVolumeSpecName: "scripts") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.993697 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:16 crc kubenswrapper[4736]: I0316 15:35:16.994897 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.089141 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.109828 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.110165 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.110259 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftttg\" (UniqueName: \"kubernetes.io/projected/d2af13e8-a004-473f-8e5c-615fb566d1a9-kube-api-access-ftttg\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.110333 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.110480 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2af13e8-a004-473f-8e5c-615fb566d1a9-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.119969 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerID="ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" exitCode=0 Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.120625 4736 generic.go:334] "Generic (PLEG): container finished" podID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerID="b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" exitCode=0 Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.120215 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.160688 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.176702 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data" (OuterVolumeSpecName: "config-data") pod "d2af13e8-a004-473f-8e5c-615fb566d1a9" (UID: "d2af13e8-a004-473f-8e5c-615fb566d1a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.212404 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.212445 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2af13e8-a004-473f-8e5c-615fb566d1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.274549 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerDied","Data":"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a"} Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.274603 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerDied","Data":"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e"} Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.274614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2af13e8-a004-473f-8e5c-615fb566d1a9","Type":"ContainerDied","Data":"d816dce6e9901142cf87ac674d78f401dc945bd5bdb21f497c95a9ee1a39afc2"} Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.274632 4736 scope.go:117] "RemoveContainer" containerID="35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.295239 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.295552 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" gracePeriod=30 Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.299593 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.301907 4736 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.303175 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.303220 4736 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerName="nova-cell0-conductor-conductor" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.310845 4736 scope.go:117] "RemoveContainer" containerID="aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.368568 4736 scope.go:117] "RemoveContainer" containerID="ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.399464 4736 scope.go:117] "RemoveContainer" containerID="b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.443805 4736 scope.go:117] "RemoveContainer" containerID="35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.447873 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f\": container with ID starting with 35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f not found: ID does not exist" containerID="35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.447922 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f"} err="failed to get container status \"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f\": rpc error: code = NotFound desc = could not find container \"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f\": container with ID starting with 35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.447954 4736 scope.go:117] "RemoveContainer" containerID="aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.451065 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295\": container with ID starting with aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295 not found: ID does not exist" containerID="aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 
15:35:17.451158 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295"} err="failed to get container status \"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295\": rpc error: code = NotFound desc = could not find container \"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295\": container with ID starting with aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295 not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.451183 4736 scope.go:117] "RemoveContainer" containerID="ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.452050 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a\": container with ID starting with ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a not found: ID does not exist" containerID="ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.452133 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a"} err="failed to get container status \"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a\": rpc error: code = NotFound desc = could not find container \"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a\": container with ID starting with ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.452170 4736 scope.go:117] "RemoveContainer" containerID="b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.456873 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e\": container with ID starting with b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e not found: ID does not exist" containerID="b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.456915 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e"} err="failed to get container status \"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e\": rpc error: code = NotFound desc = could not find container \"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e\": container with ID starting with b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.456943 4736 scope.go:117] "RemoveContainer" containerID="35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.464448 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f"} err="failed to get container status \"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f\": rpc error: code = 
NotFound desc = could not find container \"35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f\": container with ID starting with 35d1b82a22086d7e85b16355705562eff050bc0e59a9ce73f728c98879160f4f not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.464492 4736 scope.go:117] "RemoveContainer" containerID="aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.469635 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.476631 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295"} err="failed to get container status \"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295\": rpc error: code = NotFound desc = could not find container \"aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295\": container with ID starting with aa4ea9632d1ddac4338c9af8eb23b44b22c25ccc53932baef325bd48f0ec7295 not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.476685 4736 scope.go:117] "RemoveContainer" containerID="ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.480476 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a"} err="failed to get container status \"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a\": rpc error: code = NotFound desc = could not find container \"ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a\": container with ID starting with ae3aaa5bce9a8d7a660f9e1fe6a86d5b12f576e4b88aa98dc1ca27dc7790047a not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.480518 4736 scope.go:117] "RemoveContainer" containerID="b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.492208 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.492552 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e"} err="failed to get container status \"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e\": rpc error: code = NotFound desc = could not find container \"b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e\": container with ID starting with b9916d33de0c49e2eb034d09af9b9529ea7662907938bea0c1eabc703e61712e not found: ID does not exist" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.511387 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.512008 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="sg-core" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512037 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="sg-core" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.512059 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" 
containerName="ceilometer-central-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512067 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-central-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.512091 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="proxy-httpd" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512096 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="proxy-httpd" Mar 16 15:35:17 crc kubenswrapper[4736]: E0316 15:35:17.512128 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-notification-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512135 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-notification-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512413 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="sg-core" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512436 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-notification-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512451 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="proxy-httpd" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.512464 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" containerName="ceilometer-central-agent" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.514268 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.520113 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.522289 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.556031 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.620917 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621280 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621452 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscnh\" (UniqueName: \"kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621582 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621669 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621737 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.621836 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.723177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.723526 
4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.723636 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gscnh\" (UniqueName: \"kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.723799 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.723895 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.724440 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.724613 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.725321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.725487 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.731319 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.731421 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.732648 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.732781 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.744147 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gscnh\" (UniqueName: \"kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh\") pod \"ceilometer-0\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " pod="openstack/ceilometer-0" Mar 16 15:35:17 crc kubenswrapper[4736]: I0316 15:35:17.857043 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:18 crc kubenswrapper[4736]: I0316 15:35:18.426170 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.002160 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2af13e8-a004-473f-8e5c-615fb566d1a9" path="/var/lib/kubelet/pods/d2af13e8-a004-473f-8e5c-615fb566d1a9/volumes" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.084222 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.160076 4736 generic.go:334] "Generic (PLEG): container finished" podID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerID="8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" exitCode=0 Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.160400 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1586d073-7299-41ec-9875-fc52e8cdc45d","Type":"ContainerDied","Data":"8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced"} Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.162083 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerStarted","Data":"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6"} Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.162136 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerStarted","Data":"559d3a6c3735c24362e527cfad1ae25d9ab884ae3f9c841cd98d72322317303a"} Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.322889 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.485901 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data\") pod \"1586d073-7299-41ec-9875-fc52e8cdc45d\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.485958 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle\") pod \"1586d073-7299-41ec-9875-fc52e8cdc45d\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.486018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drfsj\" (UniqueName: \"kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj\") pod \"1586d073-7299-41ec-9875-fc52e8cdc45d\" (UID: \"1586d073-7299-41ec-9875-fc52e8cdc45d\") " Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.497343 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj" (OuterVolumeSpecName: "kube-api-access-drfsj") pod "1586d073-7299-41ec-9875-fc52e8cdc45d" (UID: "1586d073-7299-41ec-9875-fc52e8cdc45d"). InnerVolumeSpecName "kube-api-access-drfsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.522879 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data" (OuterVolumeSpecName: "config-data") pod "1586d073-7299-41ec-9875-fc52e8cdc45d" (UID: "1586d073-7299-41ec-9875-fc52e8cdc45d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.553924 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1586d073-7299-41ec-9875-fc52e8cdc45d" (UID: "1586d073-7299-41ec-9875-fc52e8cdc45d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.589197 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.589611 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drfsj\" (UniqueName: \"kubernetes.io/projected/1586d073-7299-41ec-9875-fc52e8cdc45d-kube-api-access-drfsj\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:19 crc kubenswrapper[4736]: I0316 15:35:19.589624 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1586d073-7299-41ec-9875-fc52e8cdc45d-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.177582 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerStarted","Data":"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9"} Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.183857 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.183749 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1586d073-7299-41ec-9875-fc52e8cdc45d","Type":"ContainerDied","Data":"e9d9af43f9362ff2558509e0e4a958fcfe934e4d9f31461e0a59ae8e0db1a409"} Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.194420 4736 scope.go:117] "RemoveContainer" containerID="8891fd73b85e55e130d8a75f3482cc2b8fed0c3bea2ad8c1aece3d20f9808ced" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.368131 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.378388 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.389586 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:20 crc kubenswrapper[4736]: E0316 15:35:20.390009 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerName="nova-cell0-conductor-conductor" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.390025 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerName="nova-cell0-conductor-conductor" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.390226 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" containerName="nova-cell0-conductor-conductor" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.390867 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.393738 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zld2d" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.394067 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.424851 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.508807 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbhx\" (UniqueName: \"kubernetes.io/projected/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-kube-api-access-2dbhx\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.508916 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.508989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.611497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.611655 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.611744 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbhx\" (UniqueName: \"kubernetes.io/projected/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-kube-api-access-2dbhx\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.616976 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.617588 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.634634 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbhx\" (UniqueName: \"kubernetes.io/projected/24e0aaa8-4eb1-408e-b98a-f99ce1f8e909-kube-api-access-2dbhx\") pod \"nova-cell0-conductor-0\" (UID: \"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909\") " pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.719618 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:20 crc kubenswrapper[4736]: I0316 15:35:20.992465 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1586d073-7299-41ec-9875-fc52e8cdc45d" path="/var/lib/kubelet/pods/1586d073-7299-41ec-9875-fc52e8cdc45d/volumes" Mar 16 15:35:21 crc kubenswrapper[4736]: I0316 15:35:21.202172 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 16 15:35:21 crc kubenswrapper[4736]: I0316 15:35:21.206690 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerStarted","Data":"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2"} Mar 16 15:35:21 crc kubenswrapper[4736]: W0316 15:35:21.209123 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24e0aaa8_4eb1_408e_b98a_f99ce1f8e909.slice/crio-734f579b235f6a702cad95f59482995f038b3dd6f21b21526cb693e1bdf9bc38 WatchSource:0}: Error finding container 734f579b235f6a702cad95f59482995f038b3dd6f21b21526cb693e1bdf9bc38: Status 404 returned error can't find the container with id 734f579b235f6a702cad95f59482995f038b3dd6f21b21526cb693e1bdf9bc38 Mar 16 15:35:21 crc kubenswrapper[4736]: I0316 15:35:21.606960 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:35:21 crc kubenswrapper[4736]: I0316 15:35:21.816163 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Mar 16 15:35:22 crc kubenswrapper[4736]: I0316 15:35:22.219720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909","Type":"ContainerStarted","Data":"82e1653fd5840495cb79cc9b94cc6ee3c5c96a65d7aa1b73f5003fcd9f3ad877"} Mar 16 15:35:22 crc kubenswrapper[4736]: I0316 15:35:22.220279 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"24e0aaa8-4eb1-408e-b98a-f99ce1f8e909","Type":"ContainerStarted","Data":"734f579b235f6a702cad95f59482995f038b3dd6f21b21526cb693e1bdf9bc38"} Mar 16 15:35:22 crc kubenswrapper[4736]: I0316 15:35:22.220300 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:22 crc kubenswrapper[4736]: I0316 15:35:22.242268 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.242249699 podStartE2EDuration="2.242249699s" podCreationTimestamp="2026-03-16 15:35:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:22.235708514 +0000 UTC m=+1323.963098801" watchObservedRunningTime="2026-03-16 15:35:22.242249699 +0000 UTC m=+1323.969639986" Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.254686 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerStarted","Data":"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c"} Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.255587 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.255500 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="proxy-httpd" containerID="cri-o://45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c" gracePeriod=30 Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.255021 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-central-agent" containerID="cri-o://4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6" gracePeriod=30 Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.255505 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="sg-core" containerID="cri-o://edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2" gracePeriod=30 Mar 16 15:35:24 crc kubenswrapper[4736]: I0316 15:35:24.255522 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-notification-agent" containerID="cri-o://84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9" gracePeriod=30 Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.267469 4736 generic.go:334] "Generic (PLEG): container finished" podID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerID="45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c" exitCode=0 Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.269125 4736 generic.go:334] "Generic (PLEG): container finished" podID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerID="edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2" exitCode=2 Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.269246 4736 generic.go:334] "Generic (PLEG): container finished" podID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerID="84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9" exitCode=0 Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.267547 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerDied","Data":"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c"} Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.269432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerDied","Data":"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2"} Mar 16 15:35:25 crc kubenswrapper[4736]: I0316 15:35:25.269518 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerDied","Data":"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9"} Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.674494 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.878659 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.878723 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.878842 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.878887 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.878923 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.879027 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gscnh\" (UniqueName: \"kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.879049 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts\") pod \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\" (UID: \"eb55bec8-47b6-41fe-9d6c-eb35bf70be58\") " Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.880622 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.881228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.893662 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh" (OuterVolumeSpecName: "kube-api-access-gscnh") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "kube-api-access-gscnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.902752 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts" (OuterVolumeSpecName: "scripts") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.912329 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.967296 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982652 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gscnh\" (UniqueName: \"kubernetes.io/projected/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-kube-api-access-gscnh\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982680 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982689 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982698 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982706 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.982714 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:26 crc kubenswrapper[4736]: I0316 15:35:26.993494 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data" (OuterVolumeSpecName: "config-data") pod "eb55bec8-47b6-41fe-9d6c-eb35bf70be58" (UID: "eb55bec8-47b6-41fe-9d6c-eb35bf70be58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.085805 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb55bec8-47b6-41fe-9d6c-eb35bf70be58-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.305864 4736 generic.go:334] "Generic (PLEG): container finished" podID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerID="4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6" exitCode=0 Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.306372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerDied","Data":"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6"} Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.306455 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb55bec8-47b6-41fe-9d6c-eb35bf70be58","Type":"ContainerDied","Data":"559d3a6c3735c24362e527cfad1ae25d9ab884ae3f9c841cd98d72322317303a"} Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.306484 4736 scope.go:117] "RemoveContainer" containerID="45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.306732 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.338320 4736 scope.go:117] "RemoveContainer" containerID="edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.363544 4736 scope.go:117] "RemoveContainer" containerID="84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.374151 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.398324 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.405332 4736 scope.go:117] "RemoveContainer" containerID="4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.410704 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.411284 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-central-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411311 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-central-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.411348 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="proxy-httpd" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411356 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="proxy-httpd" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.411370 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-notification-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411381 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-notification-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.411404 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="sg-core" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411411 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="sg-core" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411624 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="proxy-httpd" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411656 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-central-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411671 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="ceilometer-notification-agent" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.411687 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" containerName="sg-core" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.413436 4736 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.419866 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.420136 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.420840 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.469427 4736 scope.go:117] "RemoveContainer" containerID="45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.469937 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c\": container with ID starting with 45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c not found: ID does not exist" containerID="45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.469977 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c"} err="failed to get container status \"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c\": rpc error: code = NotFound desc = could not find container \"45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c\": container with ID starting with 45153d3c0cb831aef38f93a8dc7026fd4e6eeb182abc1b3691c571355469951c not found: ID does not exist" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.469999 4736 scope.go:117] "RemoveContainer" containerID="edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.470195 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2\": container with ID starting with edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2 not found: ID does not exist" containerID="edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.470220 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2"} err="failed to get container status \"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2\": rpc error: code = NotFound desc = could not find container \"edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2\": container with ID starting with edfd603f6f28ca8b93ec10d9c2aeadce2b65a08b4094cbb2fc320f05a4b56fb2 not found: ID does not exist" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.470236 4736 scope.go:117] "RemoveContainer" containerID="84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.470408 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9\": container with ID starting with 
84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9 not found: ID does not exist" containerID="84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.470433 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9"} err="failed to get container status \"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9\": rpc error: code = NotFound desc = could not find container \"84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9\": container with ID starting with 84655d5ab9194652899a28757e240127d4378db8b48bb6211a8fd4615f00f6f9 not found: ID does not exist" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.470445 4736 scope.go:117] "RemoveContainer" containerID="4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6" Mar 16 15:35:27 crc kubenswrapper[4736]: E0316 15:35:27.470609 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6\": container with ID starting with 4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6 not found: ID does not exist" containerID="4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.470632 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6"} err="failed to get container status \"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6\": rpc error: code = NotFound desc = could not find container \"4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6\": container with ID starting with 4ae9fad36c909c798459c135920fcb128fa9ec06b19deb556a81ed92a14394d6 not found: ID does not exist" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595207 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595259 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595320 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595351 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595391 
4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595415 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfw2b\" (UniqueName: \"kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.595469 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.696828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697388 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697533 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697627 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697734 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfw2b\" (UniqueName: \"kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.697913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc 
kubenswrapper[4736]: I0316 15:35:27.698470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.698777 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.723099 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.723132 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.724254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.725052 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:27 crc kubenswrapper[4736]: I0316 15:35:27.745338 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfw2b\" (UniqueName: \"kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b\") pod \"ceilometer-0\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " pod="openstack/ceilometer-0" Mar 16 15:35:28 crc kubenswrapper[4736]: I0316 15:35:28.042206 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:35:28 crc kubenswrapper[4736]: W0316 15:35:28.558283 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode34d5cc4_8821_4cc0_85f2_0dc7c745fd48.slice/crio-6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532 WatchSource:0}: Error finding container 6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532: Status 404 returned error can't find the container with id 6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532 Mar 16 15:35:28 crc kubenswrapper[4736]: I0316 15:35:28.564827 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:35:29 crc kubenswrapper[4736]: I0316 15:35:29.029080 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb55bec8-47b6-41fe-9d6c-eb35bf70be58" path="/var/lib/kubelet/pods/eb55bec8-47b6-41fe-9d6c-eb35bf70be58/volumes" Mar 16 15:35:29 crc kubenswrapper[4736]: I0316 15:35:29.330006 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerStarted","Data":"3722f183efc0219ffd1185b30f0daacef5b4c17d3b6b76626a5ec88c78e3323f"} Mar 16 15:35:29 crc kubenswrapper[4736]: I0316 15:35:29.330073 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerStarted","Data":"6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532"} Mar 16 15:35:30 crc kubenswrapper[4736]: I0316 15:35:30.343001 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerStarted","Data":"acba7046e116755f14b29b90e864c1e6f3901cf2732d5e38c6002c4d08eb1be9"} Mar 16 15:35:30 crc kubenswrapper[4736]: I0316 15:35:30.343477 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerStarted","Data":"68d0741c28017254de3a55137dee72d23723cb087378a23bbeeddc744181d7e2"} Mar 16 15:35:30 crc kubenswrapper[4736]: I0316 15:35:30.768397 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.455270 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gmclf"] Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.456620 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.458419 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.458825 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.479073 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.479151 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.479181 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.479226 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sq2p\" (UniqueName: \"kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.486659 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gmclf"] Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.581759 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.581805 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.581827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.581855 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sq2p\" (UniqueName: 
\"kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.591754 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.592075 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.592950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.615822 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sq2p\" (UniqueName: \"kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p\") pod \"nova-cell0-cell-mapping-gmclf\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.788729 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.912076 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.914186 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.928612 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.940738 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.943998 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.983557 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.989791 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.997793 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6xv\" (UniqueName: \"kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.997927 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.998148 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfcl\" (UniqueName: \"kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.998206 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.998257 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.998393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:31 crc kubenswrapper[4736]: I0316 15:35:31.998513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.034322 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102083 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102469 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfcl\" (UniqueName: \"kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102504 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102529 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102589 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.102665 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6xv\" (UniqueName: \"kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.111764 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.125936 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.130320 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.181762 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfcl\" (UniqueName: 
\"kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.187142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.193693 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.276133 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6xv\" (UniqueName: \"kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv\") pod \"nova-scheduler-0\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.282882 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.297185 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.299206 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.303529 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.338226 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.354002 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.360059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.360124 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.360209 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj662\" (UniqueName: \"kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.360480 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.370995 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.372263 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.396544 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.402746 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.462507 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6kl\" (UniqueName: \"kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.462974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.463037 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.463075 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.463095 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.463150 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj662\" (UniqueName: \"kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.463167 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.466690 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.473525 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.475152 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.525845 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj662\" (UniqueName: \"kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662\") pod \"nova-metadata-0\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.529178 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.530997 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.566093 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.566399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.566576 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.566712 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.566908 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn6kl\" (UniqueName: \"kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.567041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8vzt\" (UniqueName: \"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: 
\"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.573576 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.574163 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.595409 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.602900 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.620610 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.671497 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gmclf"] Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.684385 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.684451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.684527 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.684578 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 
15:35:32.684627 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8vzt\" (UniqueName: \"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.685875 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.689079 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.689774 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.699632 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.700034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.703759 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn6kl\" (UniqueName: \"kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl\") pod \"nova-cell1-novncproxy-0\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.729936 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8vzt\" (UniqueName: \"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt\") pod \"dnsmasq-dns-68fd9f6bc5-ddxzj\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.741600 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:32 crc kubenswrapper[4736]: I0316 15:35:32.807521 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerName="galera" probeResult="failure" output="command timed out" Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.089829 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.207517 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:33 crc kubenswrapper[4736]: W0316 15:35:33.301536 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966115ec_ce4c_4ce1_be50_9f8f5106f460.slice/crio-c72c204daf46faf9c81d93c5285d1589642957da1027be86d543c906bc3dfdc7 WatchSource:0}: Error finding container c72c204daf46faf9c81d93c5285d1589642957da1027be86d543c906bc3dfdc7: Status 404 returned error can't find the container with id c72c204daf46faf9c81d93c5285d1589642957da1027be86d543c906bc3dfdc7 Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.503693 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.510506 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gmclf" event={"ID":"4c9139de-bf71-4fc5-8c71-071cb42f9f35","Type":"ContainerStarted","Data":"a66d942085c6f4f820dc3d5e8e243683fd8879d8ab14eb213a8522ef2dfcca36"} Mar 16 15:35:33 crc kubenswrapper[4736]: W0316 15:35:33.529540 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc73dcbef_b4a2_4b23_a493_954ad3ec7c4c.slice/crio-e20144ea7f86c04ef439d256d13aa647ca37f8719cf318596282aa28559eb671 WatchSource:0}: Error finding container e20144ea7f86c04ef439d256d13aa647ca37f8719cf318596282aa28559eb671: Status 404 returned error can't find the container with id e20144ea7f86c04ef439d256d13aa647ca37f8719cf318596282aa28559eb671 Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.554782 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerStarted","Data":"cc6efb9dde20e90bbfd5b01123087696e9d336f1cb445f1922ed92edb75751cb"} Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.554863 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.574522 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerStarted","Data":"c72c204daf46faf9c81d93c5285d1589642957da1027be86d543c906bc3dfdc7"} Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.712586 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.275183092 podStartE2EDuration="6.712095632s" podCreationTimestamp="2026-03-16 15:35:27 +0000 UTC" firstStartedPulling="2026-03-16 15:35:28.56094164 +0000 UTC m=+1330.288331947" lastFinishedPulling="2026-03-16 15:35:31.9978542 +0000 UTC m=+1333.725244487" observedRunningTime="2026-03-16 15:35:33.625418357 +0000 UTC m=+1335.352808644" watchObservedRunningTime="2026-03-16 15:35:33.712095632 +0000 UTC m=+1335.439485909" Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.716482 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:33 crc kubenswrapper[4736]: I0316 15:35:33.782172 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.125010 4736 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.369928 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vlmpz"] Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.372532 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.376354 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.405018 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.466644 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.466758 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.466905 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswbv\" (UniqueName: \"kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.467186 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.487486 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vlmpz"] Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.571443 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswbv\" (UniqueName: \"kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.571610 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.571670 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.571706 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.580345 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.580727 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.583803 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.622378 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gmclf" event={"ID":"4c9139de-bf71-4fc5-8c71-071cb42f9f35","Type":"ContainerStarted","Data":"593f6c4f6e367017237c6e34d929180b33ecce95153e774ffaccc1d0e1699812"} Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.633241 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswbv\" (UniqueName: \"kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv\") pod \"nova-cell1-conductor-db-sync-vlmpz\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.633417 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" event={"ID":"83cfe74a-03e4-42bf-bed9-2d21ef6031eb","Type":"ContainerStarted","Data":"b99977ef5188118e5a1377400e5c03e1b2265586af0f375376ca77e243507518"} Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.649863 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gmclf" podStartSLOduration=3.649839822 podStartE2EDuration="3.649839822s" podCreationTimestamp="2026-03-16 15:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:34.64081293 +0000 UTC m=+1336.368203217" watchObservedRunningTime="2026-03-16 15:35:34.649839822 +0000 UTC m=+1336.377230109" Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.650696 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f212d838-a214-4337-9fd5-1c6ce7eaf547","Type":"ContainerStarted","Data":"bbea828173cd892a54720f5971bb81989c2e29b2317a09abb4b9cfcdcbde629f"} Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.677852 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c","Type":"ContainerStarted","Data":"e20144ea7f86c04ef439d256d13aa647ca37f8719cf318596282aa28559eb671"} Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.696777 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerStarted","Data":"4a568a28710e0cc4e5ed1f0bd5f14ed9ac0b0987ee9edc6860faf3a42d808f88"} Mar 16 15:35:34 crc kubenswrapper[4736]: I0316 15:35:34.762835 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:35 crc kubenswrapper[4736]: I0316 15:35:35.475645 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vlmpz"] Mar 16 15:35:35 crc kubenswrapper[4736]: W0316 15:35:35.553505 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3db0b47_39c5_4414_b863_6c472b6ee78a.slice/crio-aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911 WatchSource:0}: Error finding container aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911: Status 404 returned error can't find the container with id aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911 Mar 16 15:35:35 crc kubenswrapper[4736]: I0316 15:35:35.736796 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" event={"ID":"d3db0b47-39c5-4414-b863-6c472b6ee78a","Type":"ContainerStarted","Data":"aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911"} Mar 16 15:35:35 crc kubenswrapper[4736]: I0316 15:35:35.744351 4736 generic.go:334] "Generic (PLEG): container finished" podID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerID="be5a097bcf03f53f9eb56666a1eabe7d37d49a9d982069b1fc0665220e771a7d" exitCode=0 Mar 16 15:35:35 crc kubenswrapper[4736]: I0316 15:35:35.746747 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" event={"ID":"83cfe74a-03e4-42bf-bed9-2d21ef6031eb","Type":"ContainerDied","Data":"be5a097bcf03f53f9eb56666a1eabe7d37d49a9d982069b1fc0665220e771a7d"} Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.611519 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.707323 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.735976 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.792099 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" 
event={"ID":"d3db0b47-39c5-4414-b863-6c472b6ee78a","Type":"ContainerStarted","Data":"7087e06342218756c470df5b1083d0946e2775c0af67bc6e95e97996790157b3"} Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.809976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" event={"ID":"83cfe74a-03e4-42bf-bed9-2d21ef6031eb","Type":"ContainerStarted","Data":"eee3f0277c6514c70468deded615fd0d24df773bc0c724f2ae9e436b3f43bcb2"} Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.810576 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.822094 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" podStartSLOduration=2.8220693580000002 podStartE2EDuration="2.822069358s" podCreationTimestamp="2026-03-16 15:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:36.812936154 +0000 UTC m=+1338.540326441" watchObservedRunningTime="2026-03-16 15:35:36.822069358 +0000 UTC m=+1338.549459645" Mar 16 15:35:36 crc kubenswrapper[4736]: I0316 15:35:36.823485 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-ff55bcd5b-psrsc" podUID="4a2c18b8-790c-4bb8-ac86-c70f0220ab3f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.028132 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" podStartSLOduration=7.028075509 podStartE2EDuration="7.028075509s" podCreationTimestamp="2026-03-16 15:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:36.854875355 +0000 UTC m=+1338.582265642" watchObservedRunningTime="2026-03-16 15:35:39.028075509 +0000 UTC m=+1340.755465816" Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.842976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f212d838-a214-4337-9fd5-1c6ce7eaf547","Type":"ContainerStarted","Data":"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186"} Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.843962 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f212d838-a214-4337-9fd5-1c6ce7eaf547" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186" gracePeriod=30 Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.857752 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerStarted","Data":"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef"} Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.863955 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.342016809 podStartE2EDuration="7.863933236s" podCreationTimestamp="2026-03-16 15:35:32 +0000 UTC" firstStartedPulling="2026-03-16 15:35:33.830263759 +0000 UTC 
m=+1335.557654046" lastFinishedPulling="2026-03-16 15:35:39.352180186 +0000 UTC m=+1341.079570473" observedRunningTime="2026-03-16 15:35:39.857260698 +0000 UTC m=+1341.584650995" watchObservedRunningTime="2026-03-16 15:35:39.863933236 +0000 UTC m=+1341.591323523" Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.876323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c","Type":"ContainerStarted","Data":"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d"} Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.890459 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerStarted","Data":"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3"} Mar 16 15:35:39 crc kubenswrapper[4736]: I0316 15:35:39.920335 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.120961283 podStartE2EDuration="8.920318092s" podCreationTimestamp="2026-03-16 15:35:31 +0000 UTC" firstStartedPulling="2026-03-16 15:35:33.548948164 +0000 UTC m=+1335.276338451" lastFinishedPulling="2026-03-16 15:35:39.348304973 +0000 UTC m=+1341.075695260" observedRunningTime="2026-03-16 15:35:39.916396828 +0000 UTC m=+1341.643787115" watchObservedRunningTime="2026-03-16 15:35:39.920318092 +0000 UTC m=+1341.647708379" Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.913609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerStarted","Data":"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b"} Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.928822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerStarted","Data":"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268"} Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.929056 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-log" containerID="cri-o://ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" gracePeriod=30 Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.929101 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-metadata" containerID="cri-o://7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" gracePeriod=30 Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.946522 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.946077055 podStartE2EDuration="9.946506965s" podCreationTimestamp="2026-03-16 15:35:31 +0000 UTC" firstStartedPulling="2026-03-16 15:35:33.365983476 +0000 UTC m=+1335.093373763" lastFinishedPulling="2026-03-16 15:35:39.366413386 +0000 UTC m=+1341.093803673" observedRunningTime="2026-03-16 15:35:40.943387201 +0000 UTC m=+1342.670777488" watchObservedRunningTime="2026-03-16 15:35:40.946506965 +0000 UTC m=+1342.673897252" Mar 16 15:35:40 crc kubenswrapper[4736]: I0316 15:35:40.976556 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=3.417108067 podStartE2EDuration="8.976521767s" podCreationTimestamp="2026-03-16 15:35:32 +0000 UTC" firstStartedPulling="2026-03-16 15:35:33.791932084 +0000 UTC m=+1335.519322371" lastFinishedPulling="2026-03-16 15:35:39.351345784 +0000 UTC m=+1341.078736071" observedRunningTime="2026-03-16 15:35:40.971244976 +0000 UTC m=+1342.698635263" watchObservedRunningTime="2026-03-16 15:35:40.976521767 +0000 UTC m=+1342.703912054" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.645230 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.670693 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle\") pod \"4363417a-5e4f-42f9-bca4-97a11ac04864\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.671087 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs\") pod \"4363417a-5e4f-42f9-bca4-97a11ac04864\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.671252 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data\") pod \"4363417a-5e4f-42f9-bca4-97a11ac04864\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.671516 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj662\" (UniqueName: \"kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662\") pod \"4363417a-5e4f-42f9-bca4-97a11ac04864\" (UID: \"4363417a-5e4f-42f9-bca4-97a11ac04864\") " Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.674493 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs" (OuterVolumeSpecName: "logs") pod "4363417a-5e4f-42f9-bca4-97a11ac04864" (UID: "4363417a-5e4f-42f9-bca4-97a11ac04864"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.715381 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662" (OuterVolumeSpecName: "kube-api-access-vj662") pod "4363417a-5e4f-42f9-bca4-97a11ac04864" (UID: "4363417a-5e4f-42f9-bca4-97a11ac04864"). InnerVolumeSpecName "kube-api-access-vj662". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.776560 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4363417a-5e4f-42f9-bca4-97a11ac04864-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.777055 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj662\" (UniqueName: \"kubernetes.io/projected/4363417a-5e4f-42f9-bca4-97a11ac04864-kube-api-access-vj662\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.793942 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data" (OuterVolumeSpecName: "config-data") pod "4363417a-5e4f-42f9-bca4-97a11ac04864" (UID: "4363417a-5e4f-42f9-bca4-97a11ac04864"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.807781 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4363417a-5e4f-42f9-bca4-97a11ac04864" (UID: "4363417a-5e4f-42f9-bca4-97a11ac04864"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.879587 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.879925 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4363417a-5e4f-42f9-bca4-97a11ac04864-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.942557 4736 generic.go:334] "Generic (PLEG): container finished" podID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerID="7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" exitCode=0 Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.942593 4736 generic.go:334] "Generic (PLEG): container finished" podID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerID="ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" exitCode=143 Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.943782 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.945211 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerDied","Data":"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268"} Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.945355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerDied","Data":"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3"} Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.945432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4363417a-5e4f-42f9-bca4-97a11ac04864","Type":"ContainerDied","Data":"4a568a28710e0cc4e5ed1f0bd5f14ed9ac0b0987ee9edc6860faf3a42d808f88"} Mar 16 15:35:41 crc kubenswrapper[4736]: I0316 15:35:41.945433 4736 scope.go:117] "RemoveContainer" containerID="7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.021770 4736 scope.go:117] "RemoveContainer" containerID="ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.007738 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.047766 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.056286 4736 scope.go:117] "RemoveContainer" containerID="7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" Mar 16 15:35:42 crc kubenswrapper[4736]: E0316 15:35:42.060309 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268\": container with ID starting with 7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268 not found: ID does not exist" containerID="7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.060374 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268"} err="failed to get container status \"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268\": rpc error: code = NotFound desc = could not find container \"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268\": container with ID starting with 7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268 not found: ID does not exist" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.060413 4736 scope.go:117] "RemoveContainer" containerID="ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" Mar 16 15:35:42 crc kubenswrapper[4736]: E0316 15:35:42.061196 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3\": container with ID starting with ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3 not found: ID does not exist" containerID="ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 
15:35:42.061227 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3"} err="failed to get container status \"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3\": rpc error: code = NotFound desc = could not find container \"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3\": container with ID starting with ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3 not found: ID does not exist" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.061246 4736 scope.go:117] "RemoveContainer" containerID="7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.061520 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268"} err="failed to get container status \"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268\": rpc error: code = NotFound desc = could not find container \"7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268\": container with ID starting with 7aa8294889fbb53573dce8c7109840822157c4ea98ba8c016371e7dce5aeb268 not found: ID does not exist" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.061540 4736 scope.go:117] "RemoveContainer" containerID="ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.061698 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:42 crc kubenswrapper[4736]: E0316 15:35:42.070616 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-log" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.070670 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-log" Mar 16 15:35:42 crc kubenswrapper[4736]: E0316 15:35:42.070688 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-metadata" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.070701 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-metadata" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.070927 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-log" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.070955 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" containerName="nova-metadata-metadata" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.072036 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.064292 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3"} err="failed to get container status \"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3\": rpc error: code = NotFound desc = could not find container \"ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3\": container with ID starting with ccb9382af0f3c28a5a6984c8d05bfb3bb23fbc0401dcf1834fc4bbc2999591b3 not found: ID does not exist" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.075155 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.075937 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.076396 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.090665 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.090842 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.094744 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.094919 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9xj\" (UniqueName: \"kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.095058 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.203688 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.203801 4736 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.203890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.203928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9xj\" (UniqueName: \"kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.203964 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.204659 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.212891 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.213073 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.216075 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.227786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9xj\" (UniqueName: \"kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj\") pod \"nova-metadata-0\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.283616 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.284304 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.333305 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.358314 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.358382 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.417745 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:42 crc kubenswrapper[4736]: I0316 15:35:42.743477 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.053458 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4363417a-5e4f-42f9-bca4-97a11ac04864" path="/var/lib/kubelet/pods/4363417a-5e4f-42f9-bca4-97a11ac04864/volumes" Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.055600 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.095246 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.139951 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.314548 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.315366 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b87666975-blf44" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="dnsmasq-dns" containerID="cri-o://c5b423c00c460f9a66eeab99191397a8cfd3e65e0e00d9710ca0facad764d09c" gracePeriod=10 Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.444398 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:35:43 crc kubenswrapper[4736]: I0316 15:35:43.444825 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.056782 4736 generic.go:334] "Generic (PLEG): container finished" podID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerID="c5b423c00c460f9a66eeab99191397a8cfd3e65e0e00d9710ca0facad764d09c" exitCode=0 Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.057504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b87666975-blf44" event={"ID":"5f712fb8-4a0f-400d-b21d-1fc72a671b31","Type":"ContainerDied","Data":"c5b423c00c460f9a66eeab99191397a8cfd3e65e0e00d9710ca0facad764d09c"} Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.057540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b87666975-blf44" 
event={"ID":"5f712fb8-4a0f-400d-b21d-1fc72a671b31","Type":"ContainerDied","Data":"9b7b7c334e0f45c052302d3eaba8d1a61e61e9a850f1d4beb95d1a990d50b594"} Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.057552 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b7b7c334e0f45c052302d3eaba8d1a61e61e9a850f1d4beb95d1a990d50b594" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.061553 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.075584 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerStarted","Data":"da0b1b75fb4ddd4c3b1399430b62175dd281dd9e2b5d152f7901b21bb1d62dc7"} Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.075623 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerStarted","Data":"e4ea279b38e40622a4cae4c00df9923d354e397c148a89dbe40aa28391806713"} Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.075633 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerStarted","Data":"3a7d436bec9b45c4bc2e1076e8ce9903d8d8efcd60733fe3d5d28ff9da16db57"} Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.100492 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb\") pod \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.100569 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config\") pod \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.100672 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb\") pod \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.100760 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc\") pod \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.100911 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7nzz\" (UniqueName: \"kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz\") pod \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\" (UID: \"5f712fb8-4a0f-400d-b21d-1fc72a671b31\") " Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.133535 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz" (OuterVolumeSpecName: "kube-api-access-n7nzz") pod "5f712fb8-4a0f-400d-b21d-1fc72a671b31" (UID: 
"5f712fb8-4a0f-400d-b21d-1fc72a671b31"). InnerVolumeSpecName "kube-api-access-n7nzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.181921 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.181891941 podStartE2EDuration="3.181891941s" podCreationTimestamp="2026-03-16 15:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:44.140429104 +0000 UTC m=+1345.867819391" watchObservedRunningTime="2026-03-16 15:35:44.181891941 +0000 UTC m=+1345.909282228" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.203716 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7nzz\" (UniqueName: \"kubernetes.io/projected/5f712fb8-4a0f-400d-b21d-1fc72a671b31-kube-api-access-n7nzz\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.234979 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f712fb8-4a0f-400d-b21d-1fc72a671b31" (UID: "5f712fb8-4a0f-400d-b21d-1fc72a671b31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.261739 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f712fb8-4a0f-400d-b21d-1fc72a671b31" (UID: "5f712fb8-4a0f-400d-b21d-1fc72a671b31"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.267731 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config" (OuterVolumeSpecName: "config") pod "5f712fb8-4a0f-400d-b21d-1fc72a671b31" (UID: "5f712fb8-4a0f-400d-b21d-1fc72a671b31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.292795 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f712fb8-4a0f-400d-b21d-1fc72a671b31" (UID: "5f712fb8-4a0f-400d-b21d-1fc72a671b31"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.310186 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.310220 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.310229 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:44 crc kubenswrapper[4736]: I0316 15:35:44.310241 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f712fb8-4a0f-400d-b21d-1fc72a671b31-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:45 crc kubenswrapper[4736]: I0316 15:35:45.083691 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b87666975-blf44" Mar 16 15:35:45 crc kubenswrapper[4736]: I0316 15:35:45.111900 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:35:45 crc kubenswrapper[4736]: I0316 15:35:45.135395 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b87666975-blf44"] Mar 16 15:35:46 crc kubenswrapper[4736]: I0316 15:35:46.094438 4736 generic.go:334] "Generic (PLEG): container finished" podID="4c9139de-bf71-4fc5-8c71-071cb42f9f35" containerID="593f6c4f6e367017237c6e34d929180b33ecce95153e774ffaccc1d0e1699812" exitCode=0 Mar 16 15:35:46 crc kubenswrapper[4736]: I0316 15:35:46.094519 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gmclf" event={"ID":"4c9139de-bf71-4fc5-8c71-071cb42f9f35","Type":"ContainerDied","Data":"593f6c4f6e367017237c6e34d929180b33ecce95153e774ffaccc1d0e1699812"} Mar 16 15:35:46 crc kubenswrapper[4736]: I0316 15:35:46.539186 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:35:46 crc kubenswrapper[4736]: I0316 15:35:46.545807 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:35:46 crc kubenswrapper[4736]: I0316 15:35:46.992523 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" path="/var/lib/kubelet/pods/5f712fb8-4a0f-400d-b21d-1fc72a671b31/volumes" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.551918 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.593198 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle\") pod \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.593246 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts\") pod \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.593375 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sq2p\" (UniqueName: \"kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p\") pod \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.593652 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data\") pod \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\" (UID: \"4c9139de-bf71-4fc5-8c71-071cb42f9f35\") " Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.605340 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts" (OuterVolumeSpecName: "scripts") pod "4c9139de-bf71-4fc5-8c71-071cb42f9f35" (UID: "4c9139de-bf71-4fc5-8c71-071cb42f9f35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.605384 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p" (OuterVolumeSpecName: "kube-api-access-2sq2p") pod "4c9139de-bf71-4fc5-8c71-071cb42f9f35" (UID: "4c9139de-bf71-4fc5-8c71-071cb42f9f35"). InnerVolumeSpecName "kube-api-access-2sq2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.633229 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data" (OuterVolumeSpecName: "config-data") pod "4c9139de-bf71-4fc5-8c71-071cb42f9f35" (UID: "4c9139de-bf71-4fc5-8c71-071cb42f9f35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.650650 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c9139de-bf71-4fc5-8c71-071cb42f9f35" (UID: "4c9139de-bf71-4fc5-8c71-071cb42f9f35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.697457 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.697504 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.697518 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c9139de-bf71-4fc5-8c71-071cb42f9f35-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:47 crc kubenswrapper[4736]: I0316 15:35:47.697527 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sq2p\" (UniqueName: \"kubernetes.io/projected/4c9139de-bf71-4fc5-8c71-071cb42f9f35-kube-api-access-2sq2p\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.117420 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gmclf" event={"ID":"4c9139de-bf71-4fc5-8c71-071cb42f9f35","Type":"ContainerDied","Data":"a66d942085c6f4f820dc3d5e8e243683fd8879d8ab14eb213a8522ef2dfcca36"} Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.117814 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66d942085c6f4f820dc3d5e8e243683fd8879d8ab14eb213a8522ef2dfcca36" Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.117491 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gmclf" Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.307078 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.307406 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-log" containerID="cri-o://260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef" gracePeriod=30 Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.307546 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-api" containerID="cri-o://ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b" gracePeriod=30 Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.324053 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.324448 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" containerName="nova-scheduler-scheduler" containerID="cri-o://6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d" gracePeriod=30 Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.369837 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.370263 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" 
containerName="nova-metadata-log" containerID="cri-o://e4ea279b38e40622a4cae4c00df9923d354e397c148a89dbe40aa28391806713" gracePeriod=30 Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.370357 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-metadata" containerID="cri-o://da0b1b75fb4ddd4c3b1399430b62175dd281dd9e2b5d152f7901b21bb1d62dc7" gracePeriod=30 Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.827000 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:35:48 crc kubenswrapper[4736]: I0316 15:35:48.843372 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-ff55bcd5b-psrsc" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.056936 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.132686 4736 generic.go:334] "Generic (PLEG): container finished" podID="d3db0b47-39c5-4414-b863-6c472b6ee78a" containerID="7087e06342218756c470df5b1083d0946e2775c0af67bc6e95e97996790157b3" exitCode=0 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.132767 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" event={"ID":"d3db0b47-39c5-4414-b863-6c472b6ee78a","Type":"ContainerDied","Data":"7087e06342218756c470df5b1083d0946e2775c0af67bc6e95e97996790157b3"} Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.138073 4736 generic.go:334] "Generic (PLEG): container finished" podID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerID="260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef" exitCode=143 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.138251 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerDied","Data":"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef"} Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.141523 4736 generic.go:334] "Generic (PLEG): container finished" podID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerID="da0b1b75fb4ddd4c3b1399430b62175dd281dd9e2b5d152f7901b21bb1d62dc7" exitCode=0 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.141589 4736 generic.go:334] "Generic (PLEG): container finished" podID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerID="e4ea279b38e40622a4cae4c00df9923d354e397c148a89dbe40aa28391806713" exitCode=143 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.141804 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon-log" containerID="cri-o://afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222" gracePeriod=30 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.141930 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerDied","Data":"da0b1b75fb4ddd4c3b1399430b62175dd281dd9e2b5d152f7901b21bb1d62dc7"} Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.142002 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerDied","Data":"e4ea279b38e40622a4cae4c00df9923d354e397c148a89dbe40aa28391806713"} Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.142090 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" containerID="cri-o://ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2" gracePeriod=30 Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.510249 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs\") pod \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647393 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data\") pod \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647430 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9xj\" (UniqueName: \"kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj\") pod \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647597 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs\") pod \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647698 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle\") pod \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\" (UID: \"ce9aa7ba-6875-49ce-a0fe-d613426094c4\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.647946 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs" (OuterVolumeSpecName: "logs") pod "ce9aa7ba-6875-49ce-a0fe-d613426094c4" (UID: "ce9aa7ba-6875-49ce-a0fe-d613426094c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.648402 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce9aa7ba-6875-49ce-a0fe-d613426094c4-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.655660 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj" (OuterVolumeSpecName: "kube-api-access-hm9xj") pod "ce9aa7ba-6875-49ce-a0fe-d613426094c4" (UID: "ce9aa7ba-6875-49ce-a0fe-d613426094c4"). InnerVolumeSpecName "kube-api-access-hm9xj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.687242 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9aa7ba-6875-49ce-a0fe-d613426094c4" (UID: "ce9aa7ba-6875-49ce-a0fe-d613426094c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.689469 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data" (OuterVolumeSpecName: "config-data") pod "ce9aa7ba-6875-49ce-a0fe-d613426094c4" (UID: "ce9aa7ba-6875-49ce-a0fe-d613426094c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.750797 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.751250 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.751264 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm9xj\" (UniqueName: \"kubernetes.io/projected/ce9aa7ba-6875-49ce-a0fe-d613426094c4-kube-api-access-hm9xj\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.752133 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ce9aa7ba-6875-49ce-a0fe-d613426094c4" (UID: "ce9aa7ba-6875-49ce-a0fe-d613426094c4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.817144 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.861749 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce9aa7ba-6875-49ce-a0fe-d613426094c4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.962772 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle\") pod \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.963019 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data\") pod \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.963132 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6xv\" (UniqueName: \"kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv\") pod \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\" (UID: \"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c\") " Mar 16 15:35:49 crc kubenswrapper[4736]: I0316 15:35:49.976480 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv" (OuterVolumeSpecName: "kube-api-access-pm6xv") pod "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" (UID: "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c"). InnerVolumeSpecName "kube-api-access-pm6xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.003260 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data" (OuterVolumeSpecName: "config-data") pod "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" (UID: "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.009079 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" (UID: "c73dcbef-b4a2-4b23-a493-954ad3ec7c4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.066780 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.066852 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6xv\" (UniqueName: \"kubernetes.io/projected/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-kube-api-access-pm6xv\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.066865 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.156621 4736 generic.go:334] "Generic (PLEG): container finished" podID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" containerID="6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d" exitCode=0 Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.156785 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.157255 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c","Type":"ContainerDied","Data":"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d"} Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.157286 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c73dcbef-b4a2-4b23-a493-954ad3ec7c4c","Type":"ContainerDied","Data":"e20144ea7f86c04ef439d256d13aa647ca37f8719cf318596282aa28559eb671"} Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.157304 4736 scope.go:117] "RemoveContainer" containerID="6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.162158 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.165715 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce9aa7ba-6875-49ce-a0fe-d613426094c4","Type":"ContainerDied","Data":"3a7d436bec9b45c4bc2e1076e8ce9903d8d8efcd60733fe3d5d28ff9da16db57"} Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.225624 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.247593 4736 scope.go:117] "RemoveContainer" containerID="6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.250968 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.269504 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.277488 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.277882 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d\": container with ID starting with 6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d not found: ID does not exist" containerID="6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.277946 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d"} err="failed to get container status \"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d\": rpc error: code = NotFound desc = could not find container \"6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d\": container with ID starting with 6bb64c83f1466a2af3ab02ec66584b8d7e81fcb4716c4c031ed9b36ea964598d not found: ID does not exist" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.277981 4736 scope.go:117] "RemoveContainer" containerID="da0b1b75fb4ddd4c3b1399430b62175dd281dd9e2b5d152f7901b21bb1d62dc7" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.288636 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289243 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="dnsmasq-dns" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289271 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="dnsmasq-dns" Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289284 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="init" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289294 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="init" Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289306 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9139de-bf71-4fc5-8c71-071cb42f9f35" containerName="nova-manage" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289314 
4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9139de-bf71-4fc5-8c71-071cb42f9f35" containerName="nova-manage" Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289333 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-metadata" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289339 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-metadata" Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289354 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-log" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289360 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-log" Mar 16 15:35:50 crc kubenswrapper[4736]: E0316 15:35:50.289388 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" containerName="nova-scheduler-scheduler" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.289398 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" containerName="nova-scheduler-scheduler" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.293623 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9139de-bf71-4fc5-8c71-071cb42f9f35" containerName="nova-manage" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.293679 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-metadata" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.293694 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f712fb8-4a0f-400d-b21d-1fc72a671b31" containerName="dnsmasq-dns" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.293723 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" containerName="nova-metadata-log" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.293735 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" containerName="nova-scheduler-scheduler" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.294565 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.296501 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.300638 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.319231 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.321361 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.326684 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.326909 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.336686 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.355279 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.355355 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.369615 4736 scope.go:117] "RemoveContainer" containerID="e4ea279b38e40622a4cae4c00df9923d354e397c148a89dbe40aa28391806713" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483204 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483244 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483270 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl885\" (UniqueName: \"kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483298 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483340 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483378 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483397 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9n9s\" (UniqueName: 
\"kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.483434 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.570275 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585375 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585397 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9n9s\" (UniqueName: \"kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585531 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585551 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585571 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl885\" (UniqueName: \"kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.585602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.586807 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.597471 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.598488 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.598709 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.600633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.600707 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.618139 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl885\" (UniqueName: \"kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885\") pod \"nova-metadata-0\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.620732 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9n9s\" (UniqueName: \"kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s\") pod \"nova-scheduler-0\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.655721 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.672636 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.687238 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data\") pod \"d3db0b47-39c5-4414-b863-6c472b6ee78a\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.687398 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswbv\" (UniqueName: \"kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv\") pod \"d3db0b47-39c5-4414-b863-6c472b6ee78a\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.687624 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts\") pod \"d3db0b47-39c5-4414-b863-6c472b6ee78a\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.687711 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle\") pod \"d3db0b47-39c5-4414-b863-6c472b6ee78a\" (UID: \"d3db0b47-39c5-4414-b863-6c472b6ee78a\") " Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.693239 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv" (OuterVolumeSpecName: "kube-api-access-lswbv") pod "d3db0b47-39c5-4414-b863-6c472b6ee78a" (UID: "d3db0b47-39c5-4414-b863-6c472b6ee78a"). InnerVolumeSpecName "kube-api-access-lswbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.699287 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts" (OuterVolumeSpecName: "scripts") pod "d3db0b47-39c5-4414-b863-6c472b6ee78a" (UID: "d3db0b47-39c5-4414-b863-6c472b6ee78a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.741537 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3db0b47-39c5-4414-b863-6c472b6ee78a" (UID: "d3db0b47-39c5-4414-b863-6c472b6ee78a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.779283 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data" (OuterVolumeSpecName: "config-data") pod "d3db0b47-39c5-4414-b863-6c472b6ee78a" (UID: "d3db0b47-39c5-4414-b863-6c472b6ee78a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.791030 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.791055 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.791065 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3db0b47-39c5-4414-b863-6c472b6ee78a-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.791076 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lswbv\" (UniqueName: \"kubernetes.io/projected/d3db0b47-39c5-4414-b863-6c472b6ee78a-kube-api-access-lswbv\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:50 crc kubenswrapper[4736]: I0316 15:35:50.999075 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73dcbef-b4a2-4b23-a493-954ad3ec7c4c" path="/var/lib/kubelet/pods/c73dcbef-b4a2-4b23-a493-954ad3ec7c4c/volumes" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.000020 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9aa7ba-6875-49ce-a0fe-d613426094c4" path="/var/lib/kubelet/pods/ce9aa7ba-6875-49ce-a0fe-d613426094c4/volumes" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.179833 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" event={"ID":"d3db0b47-39c5-4414-b863-6c472b6ee78a","Type":"ContainerDied","Data":"aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911"} Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.179898 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaabaf34ba3ef2f79f57d0e3e57e80547178dcc3c5240e0c8336fc55b5e67911" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.180009 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-vlmpz" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.277636 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 16 15:35:51 crc kubenswrapper[4736]: E0316 15:35:51.278097 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3db0b47-39c5-4414-b863-6c472b6ee78a" containerName="nova-cell1-conductor-db-sync" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.278197 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3db0b47-39c5-4414-b863-6c472b6ee78a" containerName="nova-cell1-conductor-db-sync" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.278375 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3db0b47-39c5-4414-b863-6c472b6ee78a" containerName="nova-cell1-conductor-db-sync" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.279071 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.284713 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.291046 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.303695 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 16 15:35:51 crc kubenswrapper[4736]: W0316 15:35:51.361410 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3709ecdf_1b56_4e47_8c2a_31fc3a7e940f.slice/crio-fa035525bad3324349297ec4e52dd11c82d48b6a12146eaca8f648ea2402dee7 WatchSource:0}: Error finding container fa035525bad3324349297ec4e52dd11c82d48b6a12146eaca8f648ea2402dee7: Status 404 returned error can't find the container with id fa035525bad3324349297ec4e52dd11c82d48b6a12146eaca8f648ea2402dee7 Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.374156 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.420806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55l4\" (UniqueName: \"kubernetes.io/projected/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-kube-api-access-q55l4\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.420966 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.421029 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.522605 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.523133 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.523191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55l4\" (UniqueName: \"kubernetes.io/projected/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-kube-api-access-q55l4\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " 
pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.534984 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.535019 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.546807 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55l4\" (UniqueName: \"kubernetes.io/projected/202b09c4-bf70-46c9-aff5-b536e3f7ef9d-kube-api-access-q55l4\") pod \"nova-cell1-conductor-0\" (UID: \"202b09c4-bf70-46c9-aff5-b536e3f7ef9d\") " pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:51 crc kubenswrapper[4736]: I0316 15:35:51.652741 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.009693 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.141420 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle\") pod \"966115ec-ce4c-4ce1-be50-9f8f5106f460\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.141530 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data\") pod \"966115ec-ce4c-4ce1-be50-9f8f5106f460\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.141571 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs\") pod \"966115ec-ce4c-4ce1-be50-9f8f5106f460\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.141617 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfcl\" (UniqueName: \"kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl\") pod \"966115ec-ce4c-4ce1-be50-9f8f5106f460\" (UID: \"966115ec-ce4c-4ce1-be50-9f8f5106f460\") " Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.143232 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs" (OuterVolumeSpecName: "logs") pod "966115ec-ce4c-4ce1-be50-9f8f5106f460" (UID: "966115ec-ce4c-4ce1-be50-9f8f5106f460"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.150419 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl" (OuterVolumeSpecName: "kube-api-access-kzfcl") pod "966115ec-ce4c-4ce1-be50-9f8f5106f460" (UID: "966115ec-ce4c-4ce1-be50-9f8f5106f460"). InnerVolumeSpecName "kube-api-access-kzfcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.172493 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "966115ec-ce4c-4ce1-be50-9f8f5106f460" (UID: "966115ec-ce4c-4ce1-be50-9f8f5106f460"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.175294 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data" (OuterVolumeSpecName: "config-data") pod "966115ec-ce4c-4ce1-be50-9f8f5106f460" (UID: "966115ec-ce4c-4ce1-be50-9f8f5106f460"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.234783 4736 generic.go:334] "Generic (PLEG): container finished" podID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerID="ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b" exitCode=0 Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.235281 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.235303 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerDied","Data":"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.235572 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"966115ec-ce4c-4ce1-be50-9f8f5106f460","Type":"ContainerDied","Data":"c72c204daf46faf9c81d93c5285d1589642957da1027be86d543c906bc3dfdc7"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.235604 4736 scope.go:117] "RemoveContainer" containerID="ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.243555 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.243583 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966115ec-ce4c-4ce1-be50-9f8f5106f460-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.243592 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966115ec-ce4c-4ce1-be50-9f8f5106f460-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.243609 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfcl\" (UniqueName: 
\"kubernetes.io/projected/966115ec-ce4c-4ce1-be50-9f8f5106f460-kube-api-access-kzfcl\") on node \"crc\" DevicePath \"\"" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.244965 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36d4a559-748a-4e80-8bce-c66068084394","Type":"ContainerStarted","Data":"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.245007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36d4a559-748a-4e80-8bce-c66068084394","Type":"ContainerStarted","Data":"f428914989dd517b72661482fb993c5ca043f5ba54f69fad9c88627e454c2cea"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.248628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerStarted","Data":"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.248658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerStarted","Data":"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1"} Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.248671 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerStarted","Data":"fa035525bad3324349297ec4e52dd11c82d48b6a12146eaca8f648ea2402dee7"} Mar 16 15:35:52 crc kubenswrapper[4736]: W0316 15:35:52.251585 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod202b09c4_bf70_46c9_aff5_b536e3f7ef9d.slice/crio-b65321272247ea4638943cbc6aa5c88b7feb2a973352874065de7aeb89b70f65 WatchSource:0}: Error finding container b65321272247ea4638943cbc6aa5c88b7feb2a973352874065de7aeb89b70f65: Status 404 returned error can't find the container with id b65321272247ea4638943cbc6aa5c88b7feb2a973352874065de7aeb89b70f65 Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.274807 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.280323 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.280311534 podStartE2EDuration="2.280311534s" podCreationTimestamp="2026-03-16 15:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:52.263910706 +0000 UTC m=+1353.991300993" watchObservedRunningTime="2026-03-16 15:35:52.280311534 +0000 UTC m=+1354.007701821" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.290545 4736 scope.go:117] "RemoveContainer" containerID="260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.308794 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.308767654 podStartE2EDuration="2.308767654s" podCreationTimestamp="2026-03-16 15:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:52.283982902 +0000 UTC m=+1354.011373189" 
watchObservedRunningTime="2026-03-16 15:35:52.308767654 +0000 UTC m=+1354.036157941" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.345206 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.365167 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.379687 4736 scope.go:117] "RemoveContainer" containerID="ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b" Mar 16 15:35:52 crc kubenswrapper[4736]: E0316 15:35:52.380204 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b\": container with ID starting with ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b not found: ID does not exist" containerID="ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.380239 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b"} err="failed to get container status \"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b\": rpc error: code = NotFound desc = could not find container \"ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b\": container with ID starting with ed2e364e94e4119006a1b7f1d84a91c9ec60f2ab835110868408aa2e4a4ac68b not found: ID does not exist" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.380257 4736 scope.go:117] "RemoveContainer" containerID="260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef" Mar 16 15:35:52 crc kubenswrapper[4736]: E0316 15:35:52.380555 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef\": container with ID starting with 260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef not found: ID does not exist" containerID="260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.380582 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef"} err="failed to get container status \"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef\": rpc error: code = NotFound desc = could not find container \"260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef\": container with ID starting with 260bc04f2b7314280eeb5276217cdedf9a48a24b570c5400e6ad8060f5304cef not found: ID does not exist" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388253 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54012->10.217.0.152:8443: read: connection reset by peer" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388340 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:52 crc kubenswrapper[4736]: E0316 15:35:52.388726 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-log" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388739 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-log" Mar 16 15:35:52 crc kubenswrapper[4736]: E0316 15:35:52.388766 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-api" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388772 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-api" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388941 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-log" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.388963 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" containerName="nova-api-api" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.389971 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.393774 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.399638 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.557626 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.557739 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.557815 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.557860 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rq2\" (UniqueName: \"kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.659817 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.660010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64rq2\" (UniqueName: 
\"kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.660473 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.660621 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.660901 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.664568 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.665227 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.683200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64rq2\" (UniqueName: \"kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2\") pod \"nova-api-0\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.754141 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:35:52 crc kubenswrapper[4736]: I0316 15:35:52.994662 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966115ec-ce4c-4ce1-be50-9f8f5106f460" path="/var/lib/kubelet/pods/966115ec-ce4c-4ce1-be50-9f8f5106f460/volumes" Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.244987 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.261356 4736 generic.go:334] "Generic (PLEG): container finished" podID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerID="ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2" exitCode=0 Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.261445 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerDied","Data":"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2"} Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.261493 4736 scope.go:117] "RemoveContainer" containerID="bb34a40cb6b11f02510ae1feff999a3f1039c1a0ec12f398e7f5deb918dc9b15" Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.263059 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"202b09c4-bf70-46c9-aff5-b536e3f7ef9d","Type":"ContainerStarted","Data":"6f741919ccc829321c259e8db574fb8254478388dbe8f31b47cc1bc33dd56564"} Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.263160 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"202b09c4-bf70-46c9-aff5-b536e3f7ef9d","Type":"ContainerStarted","Data":"b65321272247ea4638943cbc6aa5c88b7feb2a973352874065de7aeb89b70f65"} Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.263701 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 16 15:35:53 crc kubenswrapper[4736]: W0316 15:35:53.271383 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c6a4a2b_9065_493d_9ebb_e0b47f35825f.slice/crio-44aedae27a88fef0ceb1c855ca9dde981ab9787dbc95af2053a29fdfe50ad340 WatchSource:0}: Error finding container 44aedae27a88fef0ceb1c855ca9dde981ab9787dbc95af2053a29fdfe50ad340: Status 404 returned error can't find the container with id 44aedae27a88fef0ceb1c855ca9dde981ab9787dbc95af2053a29fdfe50ad340 Mar 16 15:35:53 crc kubenswrapper[4736]: I0316 15:35:53.290514 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.2904865389999998 podStartE2EDuration="2.290486539s" podCreationTimestamp="2026-03-16 15:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:53.285714611 +0000 UTC m=+1355.013104898" watchObservedRunningTime="2026-03-16 15:35:53.290486539 +0000 UTC m=+1355.017876826" Mar 16 15:35:54 crc kubenswrapper[4736]: I0316 15:35:54.281979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerStarted","Data":"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca"} Mar 16 15:35:54 crc kubenswrapper[4736]: I0316 15:35:54.283171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerStarted","Data":"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3"} Mar 16 15:35:54 crc kubenswrapper[4736]: I0316 15:35:54.283234 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerStarted","Data":"44aedae27a88fef0ceb1c855ca9dde981ab9787dbc95af2053a29fdfe50ad340"} Mar 16 15:35:54 crc kubenswrapper[4736]: I0316 15:35:54.309490 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.309462468 podStartE2EDuration="2.309462468s" podCreationTimestamp="2026-03-16 15:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:35:54.304955309 +0000 UTC m=+1356.032345626" watchObservedRunningTime="2026-03-16 15:35:54.309462468 +0000 UTC m=+1356.036852775" Mar 16 15:35:55 crc kubenswrapper[4736]: I0316 15:35:55.656853 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 16 15:35:56 crc kubenswrapper[4736]: E0316 15:35:56.965857 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-678dd4f677-jxtsk" podUID="bccee937-d642-4483-87fb-033b157cf68c" Mar 16 15:35:57 crc kubenswrapper[4736]: I0316 15:35:57.337692 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:35:58 crc kubenswrapper[4736]: I0316 15:35:58.116070 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.147902 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561256-bpsnb"] Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.156022 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.163099 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.163411 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.163777 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.190233 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561256-bpsnb"] Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.257481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfh4\" (UniqueName: \"kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4\") pod \"auto-csr-approver-29561256-bpsnb\" (UID: \"defbe484-8be6-4777-a843-c6e7dbd7e29e\") " pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.359312 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxfh4\" (UniqueName: \"kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4\") pod \"auto-csr-approver-29561256-bpsnb\" (UID: \"defbe484-8be6-4777-a843-c6e7dbd7e29e\") " pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.380865 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxfh4\" (UniqueName: \"kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4\") pod \"auto-csr-approver-29561256-bpsnb\" (UID: \"defbe484-8be6-4777-a843-c6e7dbd7e29e\") " pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.507670 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.680484 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.681087 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.684285 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 16 15:36:00 crc kubenswrapper[4736]: I0316 15:36:00.875369 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.100395 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561256-bpsnb"] Mar 16 15:36:01 crc kubenswrapper[4736]: W0316 15:36:01.103411 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefbe484_8be6_4777_a843_c6e7dbd7e29e.slice/crio-212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710 WatchSource:0}: Error finding container 212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710: Status 404 returned error can't find the container with id 212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710 Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.384354 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" event={"ID":"defbe484-8be6-4777-a843-c6e7dbd7e29e","Type":"ContainerStarted","Data":"212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710"} Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.421170 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.604661 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:36:01 crc kubenswrapper[4736]: E0316 15:36:01.604911 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:36:01 crc kubenswrapper[4736]: E0316 15:36:01.604935 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:36:01 crc kubenswrapper[4736]: E0316 15:36:01.605003 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:38:03.604978724 +0000 UTC m=+1485.332369011 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.606324 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.683334 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.685487 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:01 crc kubenswrapper[4736]: I0316 15:36:01.689264 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:02 crc kubenswrapper[4736]: I0316 15:36:02.500950 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:02 crc kubenswrapper[4736]: I0316 15:36:02.501554 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0d509d96-9987-4162-8f43-55188067aa4e" containerName="kube-state-metrics" containerID="cri-o://ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254" gracePeriod=30 Mar 16 15:36:02 crc kubenswrapper[4736]: I0316 15:36:02.756045 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:02 crc kubenswrapper[4736]: I0316 15:36:02.756195 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.204232 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.347736 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrmnw\" (UniqueName: \"kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw\") pod \"0d509d96-9987-4162-8f43-55188067aa4e\" (UID: \"0d509d96-9987-4162-8f43-55188067aa4e\") " Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.372743 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw" (OuterVolumeSpecName: "kube-api-access-vrmnw") pod "0d509d96-9987-4162-8f43-55188067aa4e" (UID: "0d509d96-9987-4162-8f43-55188067aa4e"). InnerVolumeSpecName "kube-api-access-vrmnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.402229 4736 generic.go:334] "Generic (PLEG): container finished" podID="0d509d96-9987-4162-8f43-55188067aa4e" containerID="ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254" exitCode=2 Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.402303 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.402328 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d509d96-9987-4162-8f43-55188067aa4e","Type":"ContainerDied","Data":"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254"} Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.405659 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d509d96-9987-4162-8f43-55188067aa4e","Type":"ContainerDied","Data":"33e254db6fa911e83922cfffec8e34cccdf8c2884ebd9f6a82a612e304936aab"} Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.405693 4736 scope.go:117] "RemoveContainer" containerID="ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.409654 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" event={"ID":"defbe484-8be6-4777-a843-c6e7dbd7e29e","Type":"ContainerStarted","Data":"6ae727ef6d343c071767b767ab9085c460258b2868c0cb6387f7b7855bd6e788"} Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.445616 4736 scope.go:117] "RemoveContainer" containerID="ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254" Mar 16 15:36:03 crc kubenswrapper[4736]: E0316 15:36:03.446481 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254\": container with ID starting with ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254 not found: ID does not exist" containerID="ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.446584 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254"} err="failed to get container status \"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254\": rpc error: code = NotFound desc = could not find container \"ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254\": container with ID starting with ffbf9268e958c06ef02bbe2e8b37ee98559287a23e36086653603ffe9a194254 not found: ID does not exist" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.450845 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrmnw\" (UniqueName: \"kubernetes.io/projected/0d509d96-9987-4162-8f43-55188067aa4e-kube-api-access-vrmnw\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.466666 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" podStartSLOduration=2.052343435 podStartE2EDuration="3.466646124s" podCreationTimestamp="2026-03-16 15:36:00 +0000 UTC" firstStartedPulling="2026-03-16 15:36:01.106032406 +0000 UTC m=+1362.833422693" lastFinishedPulling="2026-03-16 
15:36:02.520335095 +0000 UTC m=+1364.247725382" observedRunningTime="2026-03-16 15:36:03.44574884 +0000 UTC m=+1365.173139137" watchObservedRunningTime="2026-03-16 15:36:03.466646124 +0000 UTC m=+1365.194036411" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.482606 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.493193 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.500851 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:03 crc kubenswrapper[4736]: E0316 15:36:03.501315 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d509d96-9987-4162-8f43-55188067aa4e" containerName="kube-state-metrics" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.501333 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d509d96-9987-4162-8f43-55188067aa4e" containerName="kube-state-metrics" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.507267 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d509d96-9987-4162-8f43-55188067aa4e" containerName="kube-state-metrics" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.508179 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.512771 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.512776 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.523801 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.655390 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65ntp\" (UniqueName: \"kubernetes.io/projected/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-api-access-65ntp\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.655848 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.655984 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.656115 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.758391 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.758462 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.758542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65ntp\" (UniqueName: \"kubernetes.io/projected/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-api-access-65ntp\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.758641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.763944 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.763988 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.769733 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.781138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65ntp\" (UniqueName: \"kubernetes.io/projected/02f0ab2b-3871-4319-a39a-2c1d13a8c6e6-kube-api-access-65ntp\") pod \"kube-state-metrics-0\" (UID: \"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6\") " pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.833587 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.840360 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:03 crc kubenswrapper[4736]: I0316 15:36:03.840380 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:04 crc kubenswrapper[4736]: I0316 15:36:04.403951 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 16 15:36:04 crc kubenswrapper[4736]: W0316 15:36:04.418294 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02f0ab2b_3871_4319_a39a_2c1d13a8c6e6.slice/crio-10ed581c7c2bf7e3a7b252691f47f426047292579f9eeb5ff3e43f0abfab3bfd WatchSource:0}: Error finding container 10ed581c7c2bf7e3a7b252691f47f426047292579f9eeb5ff3e43f0abfab3bfd: Status 404 returned error can't find the container with id 10ed581c7c2bf7e3a7b252691f47f426047292579f9eeb5ff3e43f0abfab3bfd Mar 16 15:36:04 crc kubenswrapper[4736]: I0316 15:36:04.991175 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d509d96-9987-4162-8f43-55188067aa4e" path="/var/lib/kubelet/pods/0d509d96-9987-4162-8f43-55188067aa4e/volumes" Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.256678 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.256992 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-central-agent" containerID="cri-o://3722f183efc0219ffd1185b30f0daacef5b4c17d3b6b76626a5ec88c78e3323f" gracePeriod=30 Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.257190 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="proxy-httpd" containerID="cri-o://cc6efb9dde20e90bbfd5b01123087696e9d336f1cb445f1922ed92edb75751cb" gracePeriod=30 Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.257280 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-notification-agent" containerID="cri-o://68d0741c28017254de3a55137dee72d23723cb087378a23bbeeddc744181d7e2" gracePeriod=30 Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.257396 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="sg-core" containerID="cri-o://acba7046e116755f14b29b90e864c1e6f3901cf2732d5e38c6002c4d08eb1be9" gracePeriod=30 Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.444253 4736 generic.go:334] "Generic (PLEG): container finished" podID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerID="acba7046e116755f14b29b90e864c1e6f3901cf2732d5e38c6002c4d08eb1be9" exitCode=2 Mar 16 15:36:05 crc 
kubenswrapper[4736]: I0316 15:36:05.444337 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerDied","Data":"acba7046e116755f14b29b90e864c1e6f3901cf2732d5e38c6002c4d08eb1be9"} Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.447331 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6","Type":"ContainerStarted","Data":"3002a92650f8fd29171afbd78e5f31d71d876d3b65447fe623108e8f92091e96"} Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.447377 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02f0ab2b-3871-4319-a39a-2c1d13a8c6e6","Type":"ContainerStarted","Data":"10ed581c7c2bf7e3a7b252691f47f426047292579f9eeb5ff3e43f0abfab3bfd"} Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.447481 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.449649 4736 generic.go:334] "Generic (PLEG): container finished" podID="defbe484-8be6-4777-a843-c6e7dbd7e29e" containerID="6ae727ef6d343c071767b767ab9085c460258b2868c0cb6387f7b7855bd6e788" exitCode=0 Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.449718 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" event={"ID":"defbe484-8be6-4777-a843-c6e7dbd7e29e","Type":"ContainerDied","Data":"6ae727ef6d343c071767b767ab9085c460258b2868c0cb6387f7b7855bd6e788"} Mar 16 15:36:05 crc kubenswrapper[4736]: I0316 15:36:05.471714 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.067023542 podStartE2EDuration="2.471688136s" podCreationTimestamp="2026-03-16 15:36:03 +0000 UTC" firstStartedPulling="2026-03-16 15:36:04.421316014 +0000 UTC m=+1366.148706311" lastFinishedPulling="2026-03-16 15:36:04.825980588 +0000 UTC m=+1366.553370905" observedRunningTime="2026-03-16 15:36:05.463704246 +0000 UTC m=+1367.191094553" watchObservedRunningTime="2026-03-16 15:36:05.471688136 +0000 UTC m=+1367.199078433" Mar 16 15:36:06 crc kubenswrapper[4736]: I0316 15:36:06.462855 4736 generic.go:334] "Generic (PLEG): container finished" podID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerID="cc6efb9dde20e90bbfd5b01123087696e9d336f1cb445f1922ed92edb75751cb" exitCode=0 Mar 16 15:36:06 crc kubenswrapper[4736]: I0316 15:36:06.462903 4736 generic.go:334] "Generic (PLEG): container finished" podID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerID="3722f183efc0219ffd1185b30f0daacef5b4c17d3b6b76626a5ec88c78e3323f" exitCode=0 Mar 16 15:36:06 crc kubenswrapper[4736]: I0316 15:36:06.464226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerDied","Data":"cc6efb9dde20e90bbfd5b01123087696e9d336f1cb445f1922ed92edb75751cb"} Mar 16 15:36:06 crc kubenswrapper[4736]: I0316 15:36:06.464261 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerDied","Data":"3722f183efc0219ffd1185b30f0daacef5b4c17d3b6b76626a5ec88c78e3323f"} Mar 16 15:36:06 crc kubenswrapper[4736]: I0316 15:36:06.904988 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.042000 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxfh4\" (UniqueName: \"kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4\") pod \"defbe484-8be6-4777-a843-c6e7dbd7e29e\" (UID: \"defbe484-8be6-4777-a843-c6e7dbd7e29e\") " Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.048005 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4" (OuterVolumeSpecName: "kube-api-access-cxfh4") pod "defbe484-8be6-4777-a843-c6e7dbd7e29e" (UID: "defbe484-8be6-4777-a843-c6e7dbd7e29e"). InnerVolumeSpecName "kube-api-access-cxfh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.144540 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxfh4\" (UniqueName: \"kubernetes.io/projected/defbe484-8be6-4777-a843-c6e7dbd7e29e-kube-api-access-cxfh4\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.480929 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" event={"ID":"defbe484-8be6-4777-a843-c6e7dbd7e29e","Type":"ContainerDied","Data":"212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710"} Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.480990 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="212b44c62025de2b129cfcb24a1fd5582b59d459097e6996e44cf87a2fc5c710" Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.481050 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561256-bpsnb" Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.563987 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561250-v4sxj"] Mar 16 15:36:07 crc kubenswrapper[4736]: I0316 15:36:07.573203 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561250-v4sxj"] Mar 16 15:36:08 crc kubenswrapper[4736]: I0316 15:36:08.508435 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:36:08 crc kubenswrapper[4736]: I0316 15:36:08.508875 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:36:08 crc kubenswrapper[4736]: I0316 15:36:08.672824 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 16 15:36:08 crc kubenswrapper[4736]: I0316 15:36:08.672912 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 16 15:36:09 crc kubenswrapper[4736]: I0316 15:36:09.007525 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9803e05f-288f-4f40-9a58-8e0d8622ce48" path="/var/lib/kubelet/pods/9803e05f-288f-4f40-9a58-8e0d8622ce48/volumes" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.289505 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.421999 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data\") pod \"f212d838-a214-4337-9fd5-1c6ce7eaf547\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.422280 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle\") pod \"f212d838-a214-4337-9fd5-1c6ce7eaf547\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.422403 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn6kl\" (UniqueName: \"kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl\") pod \"f212d838-a214-4337-9fd5-1c6ce7eaf547\" (UID: \"f212d838-a214-4337-9fd5-1c6ce7eaf547\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.437823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl" (OuterVolumeSpecName: "kube-api-access-cn6kl") pod "f212d838-a214-4337-9fd5-1c6ce7eaf547" (UID: "f212d838-a214-4337-9fd5-1c6ce7eaf547"). InnerVolumeSpecName "kube-api-access-cn6kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.456963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f212d838-a214-4337-9fd5-1c6ce7eaf547" (UID: "f212d838-a214-4337-9fd5-1c6ce7eaf547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.458948 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data" (OuterVolumeSpecName: "config-data") pod "f212d838-a214-4337-9fd5-1c6ce7eaf547" (UID: "f212d838-a214-4337-9fd5-1c6ce7eaf547"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.510933 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.510959 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f212d838-a214-4337-9fd5-1c6ce7eaf547","Type":"ContainerDied","Data":"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186"} Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.511010 4736 scope.go:117] "RemoveContainer" containerID="b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.510924 4736 generic.go:334] "Generic (PLEG): container finished" podID="f212d838-a214-4337-9fd5-1c6ce7eaf547" containerID="b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186" exitCode=137 Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.511136 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f212d838-a214-4337-9fd5-1c6ce7eaf547","Type":"ContainerDied","Data":"bbea828173cd892a54720f5971bb81989c2e29b2317a09abb4b9cfcdcbde629f"} Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.522545 4736 generic.go:334] "Generic (PLEG): container finished" podID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerID="68d0741c28017254de3a55137dee72d23723cb087378a23bbeeddc744181d7e2" exitCode=0 Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.522598 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerDied","Data":"68d0741c28017254de3a55137dee72d23723cb087378a23bbeeddc744181d7e2"} Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.522629 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48","Type":"ContainerDied","Data":"6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532"} Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.522645 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e5a70f364d71579dbf70be2a13506502469ff1c0710c6881625d43409e7c532" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.525274 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.525296 
4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f212d838-a214-4337-9fd5-1c6ce7eaf547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.525331 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn6kl\" (UniqueName: \"kubernetes.io/projected/f212d838-a214-4337-9fd5-1c6ce7eaf547-kube-api-access-cn6kl\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.555597 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.571433 4736 scope.go:117] "RemoveContainer" containerID="b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186" Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.572286 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186\": container with ID starting with b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186 not found: ID does not exist" containerID="b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.572345 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186"} err="failed to get container status \"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186\": rpc error: code = NotFound desc = could not find container \"b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186\": container with ID starting with b3f41b4fa31a9db9fcb699283a5b9f329cea03627ab19bf51de2834478ad1186 not found: ID does not exist" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.572293 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.581249 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.611794 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612340 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defbe484-8be6-4777-a843-c6e7dbd7e29e" containerName="oc" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612368 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="defbe484-8be6-4777-a843-c6e7dbd7e29e" containerName="oc" Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612381 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-central-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612405 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-central-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612429 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="sg-core" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612437 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="sg-core" Mar 16 
15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612462 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f212d838-a214-4337-9fd5-1c6ce7eaf547" containerName="nova-cell1-novncproxy-novncproxy" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612470 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f212d838-a214-4337-9fd5-1c6ce7eaf547" containerName="nova-cell1-novncproxy-novncproxy" Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612482 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="proxy-httpd" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612490 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="proxy-httpd" Mar 16 15:36:10 crc kubenswrapper[4736]: E0316 15:36:10.612505 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-notification-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612513 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-notification-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612723 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f212d838-a214-4337-9fd5-1c6ce7eaf547" containerName="nova-cell1-novncproxy-novncproxy" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612750 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="defbe484-8be6-4777-a843-c6e7dbd7e29e" containerName="oc" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612774 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-central-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612799 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="ceilometer-notification-agent" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612812 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="sg-core" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.612829 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" containerName="proxy-httpd" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.613939 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.619901 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.620087 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.620207 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626514 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfw2b\" (UniqueName: \"kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626645 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626676 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626719 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626797 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626841 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.626904 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts\") pod \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\" (UID: \"e34d5cc4-8821-4cc0-85f2-0dc7c745fd48\") " Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.633083 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.645136 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.650466 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.664303 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts" (OuterVolumeSpecName: "scripts") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.674122 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b" (OuterVolumeSpecName: "kube-api-access-xfw2b") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "kube-api-access-xfw2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.683692 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.691236 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.694623 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.708505 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.735648 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.736044 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.737067 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.737989 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkrfh\" (UniqueName: \"kubernetes.io/projected/b1ca423d-d8ce-437c-9fca-1b57025ab173-kube-api-access-dkrfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738147 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738308 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738400 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738481 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfw2b\" (UniqueName: \"kubernetes.io/projected/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-kube-api-access-xfw2b\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738548 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.738614 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.755393 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.755665 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.774345 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.789410 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data" (OuterVolumeSpecName: "config-data") pod "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" (UID: "e34d5cc4-8821-4cc0-85f2-0dc7c745fd48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.840411 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841018 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkrfh\" (UniqueName: \"kubernetes.io/projected/b1ca423d-d8ce-437c-9fca-1b57025ab173-kube-api-access-dkrfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841374 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841624 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.841692 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:10 crc 
kubenswrapper[4736]: I0316 15:36:10.843859 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.846260 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.849003 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.849357 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ca423d-d8ce-437c-9fca-1b57025ab173-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.859909 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkrfh\" (UniqueName: \"kubernetes.io/projected/b1ca423d-d8ce-437c-9fca-1b57025ab173-kube-api-access-dkrfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"b1ca423d-d8ce-437c-9fca-1b57025ab173\") " pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.942829 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:10 crc kubenswrapper[4736]: I0316 15:36:10.998734 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f212d838-a214-4337-9fd5-1c6ce7eaf547" path="/var/lib/kubelet/pods/f212d838-a214-4337-9fd5-1c6ce7eaf547/volumes" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.450961 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.533477 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1ca423d-d8ce-437c-9fca-1b57025ab173","Type":"ContainerStarted","Data":"507d145633cf791050add9c8a146e27a6c12fc59e86ccfe0322481efe01ebe2b"} Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.537753 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.543170 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.614843 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-67c978df54-kdnqn" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.615025 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.706372 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.725742 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.771658 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.774059 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.777483 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.777643 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.777757 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.786227 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878214 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878536 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878599 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc 
kubenswrapper[4736]: I0316 15:36:11.878654 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg6bl\" (UniqueName: \"kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878732 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.878753 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979741 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979786 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979866 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg6bl\" (UniqueName: \"kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979884 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979947 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.979976 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.980031 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.980824 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.980964 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.987800 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.988092 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.989464 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:11 crc kubenswrapper[4736]: I0316 15:36:11.989685 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:11.991516 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " pod="openstack/ceilometer-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.009254 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg6bl\" (UniqueName: \"kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl\") pod \"ceilometer-0\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " 
pod="openstack/ceilometer-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.091306 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.564199 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b1ca423d-d8ce-437c-9fca-1b57025ab173","Type":"ContainerStarted","Data":"490492affe84b3a0fe45ad9c52c5021fc03987aa490b65ea17a047ac6bc4dd4a"} Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.597914 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5978976129999998 podStartE2EDuration="2.597897613s" podCreationTimestamp="2026-03-16 15:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:12.592986717 +0000 UTC m=+1374.320377004" watchObservedRunningTime="2026-03-16 15:36:12.597897613 +0000 UTC m=+1374.325287900" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.750924 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.769088 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.776091 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.783350 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 16 15:36:12 crc kubenswrapper[4736]: I0316 15:36:12.994575 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34d5cc4-8821-4cc0-85f2-0dc7c745fd48" path="/var/lib/kubelet/pods/e34d5cc4-8821-4cc0-85f2-0dc7c745fd48/volumes" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.573769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerStarted","Data":"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95"} Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.573815 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerStarted","Data":"585f164de5e09db83db8ebd5909d3d4d3c4a96106c0fb6014fdb1c9ecc7f9f09"} Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.581025 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.802093 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.804154 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.824746 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.905389 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.925410 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.925468 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbx9r\" (UniqueName: \"kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.925527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.925588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:13 crc kubenswrapper[4736]: I0316 15:36:13.925680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.027554 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.028612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.029812 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " 
pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.030757 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.031461 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.031497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbx9r\" (UniqueName: \"kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.031570 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.034513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.035947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.086843 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbx9r\" (UniqueName: \"kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r\") pod \"dnsmasq-dns-6f7855c887-z2wwb\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.135695 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.604504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerStarted","Data":"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024"} Mar 16 15:36:14 crc kubenswrapper[4736]: I0316 15:36:14.780525 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:36:15 crc kubenswrapper[4736]: I0316 15:36:15.616874 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerStarted","Data":"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b"} Mar 16 15:36:15 crc kubenswrapper[4736]: I0316 15:36:15.619832 4736 generic.go:334] "Generic (PLEG): container finished" podID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerID="5ee12c30475891f01b8fb37e2e1b6297103b0a84582384d3c384ec3570c6ec66" exitCode=0 Mar 16 15:36:15 crc kubenswrapper[4736]: I0316 15:36:15.620191 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" event={"ID":"e662df69-1ac9-4967-a5a3-e72675cf70ff","Type":"ContainerDied","Data":"5ee12c30475891f01b8fb37e2e1b6297103b0a84582384d3c384ec3570c6ec66"} Mar 16 15:36:15 crc kubenswrapper[4736]: I0316 15:36:15.620298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" event={"ID":"e662df69-1ac9-4967-a5a3-e72675cf70ff","Type":"ContainerStarted","Data":"5da8f608641b856507b68ed10073df4f2450bc4f122964a1047419fce1cfadb2"} Mar 16 15:36:15 crc kubenswrapper[4736]: I0316 15:36:15.944133 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:16 crc kubenswrapper[4736]: I0316 15:36:16.563290 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:16 crc kubenswrapper[4736]: I0316 15:36:16.631398 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-log" containerID="cri-o://5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3" gracePeriod=30 Mar 16 15:36:16 crc kubenswrapper[4736]: I0316 15:36:16.632561 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" event={"ID":"e662df69-1ac9-4967-a5a3-e72675cf70ff","Type":"ContainerStarted","Data":"4f73e59ce8766f2759453342c1038c3ceeb0ecc4dace95cad349bbcc974a7b92"} Mar 16 15:36:16 crc kubenswrapper[4736]: I0316 15:36:16.632598 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:16 crc kubenswrapper[4736]: I0316 15:36:16.632959 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-api" containerID="cri-o://beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca" gracePeriod=30 Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 15:36:17.655514 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerStarted","Data":"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca"} Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 
15:36:17.656844 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 15:36:17.667406 4736 generic.go:334] "Generic (PLEG): container finished" podID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerID="5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3" exitCode=143 Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 15:36:17.668347 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerDied","Data":"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3"} Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 15:36:17.694854 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" podStartSLOduration=4.691083897 podStartE2EDuration="4.691083897s" podCreationTimestamp="2026-03-16 15:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:16.681486317 +0000 UTC m=+1378.408876614" watchObservedRunningTime="2026-03-16 15:36:17.691083897 +0000 UTC m=+1379.418474184" Mar 16 15:36:17 crc kubenswrapper[4736]: I0316 15:36:17.698278 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.474530873 podStartE2EDuration="6.698263144s" podCreationTimestamp="2026-03-16 15:36:11 +0000 UTC" firstStartedPulling="2026-03-16 15:36:12.758748348 +0000 UTC m=+1374.486138635" lastFinishedPulling="2026-03-16 15:36:16.982480619 +0000 UTC m=+1378.709870906" observedRunningTime="2026-03-16 15:36:17.69520555 +0000 UTC m=+1379.422595837" watchObservedRunningTime="2026-03-16 15:36:17.698263144 +0000 UTC m=+1379.425653421" Mar 16 15:36:18 crc kubenswrapper[4736]: I0316 15:36:18.656012 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.613210 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673654 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673762 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673798 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673837 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673871 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.673888 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkwjm\" (UniqueName: \"kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm\") pod \"b6de0392-402f-47e4-aa1d-19c956a68e1d\" (UID: \"b6de0392-402f-47e4-aa1d-19c956a68e1d\") " Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.675245 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs" (OuterVolumeSpecName: "logs") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.675951 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6de0392-402f-47e4-aa1d-19c956a68e1d-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.686005 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.692314 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm" (OuterVolumeSpecName: "kube-api-access-nkwjm") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). InnerVolumeSpecName "kube-api-access-nkwjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.701116 4736 generic.go:334] "Generic (PLEG): container finished" podID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerID="afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222" exitCode=137 Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.701579 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-central-agent" containerID="cri-o://431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95" gracePeriod=30 Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.702022 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c978df54-kdnqn" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.702780 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerDied","Data":"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222"} Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.702893 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c978df54-kdnqn" event={"ID":"b6de0392-402f-47e4-aa1d-19c956a68e1d","Type":"ContainerDied","Data":"127d2153838c8b6d99bb7724793cea56b2f67e573a545b6507d9527b41ea6fb2"} Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.702989 4736 scope.go:117] "RemoveContainer" containerID="ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.703526 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="proxy-httpd" containerID="cri-o://3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca" gracePeriod=30 Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.703690 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="sg-core" containerID="cri-o://6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b" gracePeriod=30 Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.703825 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-notification-agent" containerID="cri-o://508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024" gracePeriod=30 Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.739832 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts" (OuterVolumeSpecName: "scripts") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.777880 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data" (OuterVolumeSpecName: "config-data") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.778291 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.778320 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.778351 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6de0392-402f-47e4-aa1d-19c956a68e1d-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.778366 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkwjm\" (UniqueName: \"kubernetes.io/projected/b6de0392-402f-47e4-aa1d-19c956a68e1d-kube-api-access-nkwjm\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.796142 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.844292 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b6de0392-402f-47e4-aa1d-19c956a68e1d" (UID: "b6de0392-402f-47e4-aa1d-19c956a68e1d"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.880666 4736 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.880703 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6de0392-402f-47e4-aa1d-19c956a68e1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.967698 4736 scope.go:117] "RemoveContainer" containerID="afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.991287 4736 scope.go:117] "RemoveContainer" containerID="ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2" Mar 16 15:36:19 crc kubenswrapper[4736]: E0316 15:36:19.991752 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2\": container with ID starting with ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2 not found: ID does not exist" containerID="ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.991787 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2"} err="failed to get container status \"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2\": rpc error: code = NotFound desc = could not find container \"ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2\": container with ID starting with ec7a546e5b00a48f1b8e940316ff04389add7b34cee99d2f1b01654c4990b1c2 not found: ID does not exist" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.991814 4736 scope.go:117] "RemoveContainer" containerID="afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222" Mar 16 15:36:19 crc kubenswrapper[4736]: E0316 15:36:19.992143 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222\": container with ID starting with afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222 not found: ID does not exist" containerID="afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222" Mar 16 15:36:19 crc kubenswrapper[4736]: I0316 15:36:19.992162 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222"} err="failed to get container status \"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222\": rpc error: code = NotFound desc = could not find container \"afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222\": container with ID starting with afec60def9ccf57438ca3ca1ad9b7e51a3abf15a3e6c9d520c24fac1d0bca222 not found: ID does not exist" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.036950 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.059534 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-67c978df54-kdnqn"] Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.525532 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.596532 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle\") pod \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.596633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs\") pod \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.596690 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data\") pod \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.596740 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64rq2\" (UniqueName: \"kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2\") pod \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\" (UID: \"9c6a4a2b-9065-493d-9ebb-e0b47f35825f\") " Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.598288 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs" (OuterVolumeSpecName: "logs") pod "9c6a4a2b-9065-493d-9ebb-e0b47f35825f" (UID: "9c6a4a2b-9065-493d-9ebb-e0b47f35825f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.605342 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2" (OuterVolumeSpecName: "kube-api-access-64rq2") pod "9c6a4a2b-9065-493d-9ebb-e0b47f35825f" (UID: "9c6a4a2b-9065-493d-9ebb-e0b47f35825f"). InnerVolumeSpecName "kube-api-access-64rq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.635674 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c6a4a2b-9065-493d-9ebb-e0b47f35825f" (UID: "9c6a4a2b-9065-493d-9ebb-e0b47f35825f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.671013 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data" (OuterVolumeSpecName: "config-data") pod "9c6a4a2b-9065-493d-9ebb-e0b47f35825f" (UID: "9c6a4a2b-9065-493d-9ebb-e0b47f35825f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.713396 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.713426 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64rq2\" (UniqueName: \"kubernetes.io/projected/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-kube-api-access-64rq2\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.713437 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.713445 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c6a4a2b-9065-493d-9ebb-e0b47f35825f-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.750535 4736 generic.go:334] "Generic (PLEG): container finished" podID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerID="beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca" exitCode=0 Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.750599 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerDied","Data":"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca"} Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.750627 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c6a4a2b-9065-493d-9ebb-e0b47f35825f","Type":"ContainerDied","Data":"44aedae27a88fef0ceb1c855ca9dde981ab9787dbc95af2053a29fdfe50ad340"} Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.750641 4736 scope.go:117] "RemoveContainer" containerID="beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.750767 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773532 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerID="3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca" exitCode=0 Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773577 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerID="6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b" exitCode=2 Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773588 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerID="508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024" exitCode=0 Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerDied","Data":"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca"} Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773647 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerDied","Data":"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b"} Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.773662 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerDied","Data":"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024"} Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.810928 4736 scope.go:117] "RemoveContainer" containerID="5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.833133 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.848055 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.864555 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.864948 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.864966 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.864974 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-log" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.864980 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-log" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.865008 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon-log" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865016 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon-log" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.865028 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-api" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865033 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-api" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.865051 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865057 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865233 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865243 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865256 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865269 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon-log" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865277 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-log" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865287 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" containerName="nova-api-api" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.865445 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.865458 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" containerName="horizon" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.866568 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.869876 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.869956 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.871093 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.879810 4736 scope.go:117] "RemoveContainer" containerID="beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.880764 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca\": container with ID starting with beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca not found: ID does not exist" containerID="beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.880793 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca"} err="failed to get container status \"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca\": rpc error: code = NotFound desc = could not find container \"beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca\": container with ID starting with beabe5a8489926985c0b0e07f65433ecb29747a5d718276125a4ad04707ba4ca not found: ID does not exist" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.880813 4736 scope.go:117] "RemoveContainer" containerID="5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3" Mar 16 15:36:20 crc kubenswrapper[4736]: E0316 15:36:20.881331 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3\": container with ID starting with 5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3 not found: ID does not exist" containerID="5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.881364 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3"} err="failed to get container status \"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3\": rpc error: code = NotFound desc = could not find container \"5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3\": container with ID starting with 5b4adcffc9cb00dd409ec7ecc6d806edace18c4e885d88283380fc26d84d8fb3 not found: ID does not exist" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.893619 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzqv\" (UniqueName: \"kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" 
Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918546 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918620 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918639 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.918653 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.943783 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.970014 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.989750 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c6a4a2b-9065-493d-9ebb-e0b47f35825f" path="/var/lib/kubelet/pods/9c6a4a2b-9065-493d-9ebb-e0b47f35825f/volumes" Mar 16 15:36:20 crc kubenswrapper[4736]: I0316 15:36:20.990622 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6de0392-402f-47e4-aa1d-19c956a68e1d" path="/var/lib/kubelet/pods/b6de0392-402f-47e4-aa1d-19c956a68e1d/volumes" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020537 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020595 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020727 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020750 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.020953 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mzqv\" (UniqueName: \"kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.022133 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.025967 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.026531 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.027530 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.030269 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.040255 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mzqv\" (UniqueName: \"kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv\") pod \"nova-api-0\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.192808 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.770569 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:21 crc kubenswrapper[4736]: I0316 15:36:21.853590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerStarted","Data":"c4ba95f5cdb712b09ca815c6440012b37bbd97ef4c1841ddacaf7b0965db4120"} Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.140070 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.460671 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qw4c4"] Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.462112 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.464929 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.466886 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.496540 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qw4c4"] Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.562275 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.562950 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.563007 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhlg\" (UniqueName: \"kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.563156 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.665280 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 
15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.665415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.665452 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhlg\" (UniqueName: \"kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.665515 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.671651 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.672889 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.689500 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.694067 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhlg\" (UniqueName: \"kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg\") pod \"nova-cell1-cell-mapping-qw4c4\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.815644 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.868667 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerStarted","Data":"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893"} Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.868714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerStarted","Data":"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68"} Mar 16 15:36:22 crc kubenswrapper[4736]: I0316 15:36:22.894981 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.894963449 podStartE2EDuration="2.894963449s" podCreationTimestamp="2026-03-16 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:22.89065611 +0000 UTC m=+1384.618046417" watchObservedRunningTime="2026-03-16 15:36:22.894963449 +0000 UTC m=+1384.622353736" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.442656 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qw4c4"] Mar 16 15:36:23 crc kubenswrapper[4736]: W0316 15:36:23.447137 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedef7df8_01f6_4f77_a1e9_25f7feef5ccd.slice/crio-3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07 WatchSource:0}: Error finding container 3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07: Status 404 returned error can't find the container with id 3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07 Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.656832 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688235 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688388 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688463 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688532 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688612 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg6bl\" (UniqueName: \"kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688640 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688787 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.688857 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data\") pod \"ac80c6a5-0050-462e-b2d4-59f82ac50751\" (UID: \"ac80c6a5-0050-462e-b2d4-59f82ac50751\") " Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.689073 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.689563 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.689942 4736 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.689956 4736 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac80c6a5-0050-462e-b2d4-59f82ac50751-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.700772 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts" (OuterVolumeSpecName: "scripts") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.705170 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl" (OuterVolumeSpecName: "kube-api-access-jg6bl") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "kube-api-access-jg6bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.787368 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.792015 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.792053 4736 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.792068 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg6bl\" (UniqueName: \"kubernetes.io/projected/ac80c6a5-0050-462e-b2d4-59f82ac50751-kube-api-access-jg6bl\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.841071 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.873437 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.878219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qw4c4" event={"ID":"edef7df8-01f6-4f77-a1e9-25f7feef5ccd","Type":"ContainerStarted","Data":"f2a596d31f9913531bb6a14d73b142598d0fa23305070bad9c9782b7947b7d30"} Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.878266 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qw4c4" event={"ID":"edef7df8-01f6-4f77-a1e9-25f7feef5ccd","Type":"ContainerStarted","Data":"3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07"} Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.883011 4736 generic.go:334] "Generic (PLEG): container finished" podID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerID="431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95" exitCode=0 Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.883300 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerDied","Data":"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95"} Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.883393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac80c6a5-0050-462e-b2d4-59f82ac50751","Type":"ContainerDied","Data":"585f164de5e09db83db8ebd5909d3d4d3c4a96106c0fb6014fdb1c9ecc7f9f09"} Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.883421 4736 scope.go:117] "RemoveContainer" containerID="3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.883833 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.907483 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.907520 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.913415 4736 scope.go:117] "RemoveContainer" containerID="6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.915680 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qw4c4" podStartSLOduration=1.915654144 podStartE2EDuration="1.915654144s" podCreationTimestamp="2026-03-16 15:36:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:23.91405937 +0000 UTC m=+1385.641449657" watchObservedRunningTime="2026-03-16 15:36:23.915654144 +0000 UTC m=+1385.643044431" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.936044 4736 scope.go:117] "RemoveContainer" containerID="508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.940261 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data" (OuterVolumeSpecName: "config-data") pod "ac80c6a5-0050-462e-b2d4-59f82ac50751" (UID: "ac80c6a5-0050-462e-b2d4-59f82ac50751"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.962195 4736 scope.go:117] "RemoveContainer" containerID="431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.981718 4736 scope.go:117] "RemoveContainer" containerID="3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca" Mar 16 15:36:23 crc kubenswrapper[4736]: E0316 15:36:23.982458 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca\": container with ID starting with 3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca not found: ID does not exist" containerID="3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.982573 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca"} err="failed to get container status \"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca\": rpc error: code = NotFound desc = could not find container \"3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca\": container with ID starting with 3dd6755928750fd159fe70f062fabd86ea6340960a093d86ff73f262bb9d78ca not found: ID does not exist" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.982647 4736 scope.go:117] "RemoveContainer" containerID="6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b" Mar 16 15:36:23 crc kubenswrapper[4736]: E0316 15:36:23.983429 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b\": container with ID starting with 6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b not found: ID does not exist" containerID="6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.983538 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b"} err="failed to get container status \"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b\": rpc error: code = NotFound desc = could not find container \"6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b\": container with ID starting with 6e7a32bce2e011c2834ae0c041b066148139ca5f4ce3f96e9daaf688ee03261b not found: ID does not exist" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.983631 4736 scope.go:117] "RemoveContainer" containerID="508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024" Mar 16 15:36:23 crc kubenswrapper[4736]: E0316 15:36:23.984015 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024\": container with ID starting with 508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024 not found: ID does not exist" containerID="508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.984095 4736 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024"} err="failed to get container status \"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024\": rpc error: code = NotFound desc = could not find container \"508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024\": container with ID starting with 508c873d6461f0da9609bb5919e8fc1782fe94ac8ee6d648ab00851c2a726024 not found: ID does not exist" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.984201 4736 scope.go:117] "RemoveContainer" containerID="431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95" Mar 16 15:36:23 crc kubenswrapper[4736]: E0316 15:36:23.984616 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95\": container with ID starting with 431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95 not found: ID does not exist" containerID="431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95" Mar 16 15:36:23 crc kubenswrapper[4736]: I0316 15:36:23.984712 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95"} err="failed to get container status \"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95\": rpc error: code = NotFound desc = could not find container \"431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95\": container with ID starting with 431703604afce90ff9eeb647b442eb33f08c0776a5f61d72180fd77545333a95 not found: ID does not exist" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.011169 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac80c6a5-0050-462e-b2d4-59f82ac50751-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.138253 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.205119 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.208639 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="dnsmasq-dns" containerID="cri-o://eee3f0277c6514c70468deded615fd0d24df773bc0c724f2ae9e436b3f43bcb2" gracePeriod=10 Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.304219 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.388491 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.402693 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:24 crc kubenswrapper[4736]: E0316 15:36:24.403438 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-notification-agent" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403468 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-notification-agent" Mar 16 15:36:24 crc 
kubenswrapper[4736]: E0316 15:36:24.403521 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="sg-core" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403531 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="sg-core" Mar 16 15:36:24 crc kubenswrapper[4736]: E0316 15:36:24.403552 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="proxy-httpd" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403560 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="proxy-httpd" Mar 16 15:36:24 crc kubenswrapper[4736]: E0316 15:36:24.403574 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-central-agent" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403582 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-central-agent" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403805 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="proxy-httpd" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403821 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-central-agent" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403829 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="sg-core" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.403847 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" containerName="ceilometer-notification-agent" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.406237 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.410031 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.411191 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.411343 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.421402 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.538866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.538928 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrsk\" (UniqueName: \"kubernetes.io/projected/321d2397-bb79-4799-8725-95081269785f-kube-api-access-gsrsk\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539211 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539295 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-scripts\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539340 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-run-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539424 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-config-data\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-log-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.539805 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642543 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642647 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrsk\" (UniqueName: \"kubernetes.io/projected/321d2397-bb79-4799-8725-95081269785f-kube-api-access-gsrsk\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642702 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642730 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-scripts\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642756 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-run-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642789 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-config-data\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.642812 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-log-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.643392 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-log-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.645023 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/321d2397-bb79-4799-8725-95081269785f-run-httpd\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.661271 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-scripts\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.661954 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.666750 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-config-data\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.667608 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.675575 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrsk\" (UniqueName: \"kubernetes.io/projected/321d2397-bb79-4799-8725-95081269785f-kube-api-access-gsrsk\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.676132 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/321d2397-bb79-4799-8725-95081269785f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"321d2397-bb79-4799-8725-95081269785f\") " pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.805193 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.939614 4736 generic.go:334] "Generic (PLEG): container finished" podID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerID="eee3f0277c6514c70468deded615fd0d24df773bc0c724f2ae9e436b3f43bcb2" exitCode=0 Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.940540 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" event={"ID":"83cfe74a-03e4-42bf-bed9-2d21ef6031eb","Type":"ContainerDied","Data":"eee3f0277c6514c70468deded615fd0d24df773bc0c724f2ae9e436b3f43bcb2"} Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.940635 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" event={"ID":"83cfe74a-03e4-42bf-bed9-2d21ef6031eb","Type":"ContainerDied","Data":"b99977ef5188118e5a1377400e5c03e1b2265586af0f375376ca77e243507518"} Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.940651 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b99977ef5188118e5a1377400e5c03e1b2265586af0f375376ca77e243507518" Mar 16 15:36:24 crc kubenswrapper[4736]: I0316 15:36:24.949571 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.062245 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8vzt\" (UniqueName: \"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt\") pod \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.064415 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc\") pod \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.064470 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config\") pod \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.064665 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb\") pod \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.064702 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb\") pod \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\" (UID: \"83cfe74a-03e4-42bf-bed9-2d21ef6031eb\") " Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.072912 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac80c6a5-0050-462e-b2d4-59f82ac50751" path="/var/lib/kubelet/pods/ac80c6a5-0050-462e-b2d4-59f82ac50751/volumes" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.090184 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt" (OuterVolumeSpecName: "kube-api-access-b8vzt") pod "83cfe74a-03e4-42bf-bed9-2d21ef6031eb" (UID: "83cfe74a-03e4-42bf-bed9-2d21ef6031eb"). InnerVolumeSpecName "kube-api-access-b8vzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.167515 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83cfe74a-03e4-42bf-bed9-2d21ef6031eb" (UID: "83cfe74a-03e4-42bf-bed9-2d21ef6031eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.168985 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.169005 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8vzt\" (UniqueName: \"kubernetes.io/projected/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-kube-api-access-b8vzt\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.173557 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83cfe74a-03e4-42bf-bed9-2d21ef6031eb" (UID: "83cfe74a-03e4-42bf-bed9-2d21ef6031eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.180172 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83cfe74a-03e4-42bf-bed9-2d21ef6031eb" (UID: "83cfe74a-03e4-42bf-bed9-2d21ef6031eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.201638 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config" (OuterVolumeSpecName: "config") pod "83cfe74a-03e4-42bf-bed9-2d21ef6031eb" (UID: "83cfe74a-03e4-42bf-bed9-2d21ef6031eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.271411 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.271452 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.271461 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cfe74a-03e4-42bf-bed9-2d21ef6031eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.400343 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.954516 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68fd9f6bc5-ddxzj" Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.958720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"7bea4df300efab67b4cee39b2e7427609216a49525f1efed6d3324f694c73f79"} Mar 16 15:36:25 crc kubenswrapper[4736]: I0316 15:36:25.958766 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"1e8b7ba660a16d678af911793f828c4e043f2f001b5b898d666c84b1105a78cb"} Mar 16 15:36:26 crc kubenswrapper[4736]: I0316 15:36:26.019694 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:36:26 crc kubenswrapper[4736]: I0316 15:36:26.030575 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68fd9f6bc5-ddxzj"] Mar 16 15:36:26 crc kubenswrapper[4736]: I0316 15:36:26.967211 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"15c84aef5a037a4d6665bf70b0236669635fd822098ef414893f486c2120d018"} Mar 16 15:36:26 crc kubenswrapper[4736]: I0316 15:36:26.988514 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" path="/var/lib/kubelet/pods/83cfe74a-03e4-42bf-bed9-2d21ef6031eb/volumes" Mar 16 15:36:27 crc kubenswrapper[4736]: I0316 15:36:27.982005 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"e2f8025c0d4177c5bac0637772993e31d74c1e2eb509ab76069c662128aa9dd1"} Mar 16 15:36:29 crc kubenswrapper[4736]: I0316 15:36:29.002385 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 16 15:36:29 crc kubenswrapper[4736]: I0316 15:36:29.004640 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"d7a659543b99b72027d372061be5f5e4b8da16cbe70eaf31bc61912f7c9d4288"} Mar 16 15:36:29 crc kubenswrapper[4736]: I0316 15:36:29.064160 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=1.87137686 podStartE2EDuration="5.06409303s" podCreationTimestamp="2026-03-16 15:36:24 +0000 UTC" firstStartedPulling="2026-03-16 15:36:25.41133373 +0000 UTC m=+1387.138724017" lastFinishedPulling="2026-03-16 15:36:28.6040499 +0000 UTC m=+1390.331440187" observedRunningTime="2026-03-16 15:36:29.04049731 +0000 UTC m=+1390.767887587" watchObservedRunningTime="2026-03-16 15:36:29.06409303 +0000 UTC m=+1390.791483317" Mar 16 15:36:31 crc kubenswrapper[4736]: I0316 15:36:31.044504 4736 generic.go:334] "Generic (PLEG): container finished" podID="edef7df8-01f6-4f77-a1e9-25f7feef5ccd" containerID="f2a596d31f9913531bb6a14d73b142598d0fa23305070bad9c9782b7947b7d30" exitCode=0 Mar 16 15:36:31 crc kubenswrapper[4736]: I0316 15:36:31.044615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qw4c4" event={"ID":"edef7df8-01f6-4f77-a1e9-25f7feef5ccd","Type":"ContainerDied","Data":"f2a596d31f9913531bb6a14d73b142598d0fa23305070bad9c9782b7947b7d30"} Mar 16 15:36:31 crc kubenswrapper[4736]: I0316 15:36:31.193145 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:31 crc kubenswrapper[4736]: I0316 15:36:31.193235 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.214471 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.214866 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.556232 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.641127 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts\") pod \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.641256 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data\") pod \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.641331 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vhlg\" (UniqueName: \"kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg\") pod \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.641380 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle\") pod \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\" (UID: \"edef7df8-01f6-4f77-a1e9-25f7feef5ccd\") " Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.652291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg" (OuterVolumeSpecName: "kube-api-access-9vhlg") pod "edef7df8-01f6-4f77-a1e9-25f7feef5ccd" (UID: "edef7df8-01f6-4f77-a1e9-25f7feef5ccd"). InnerVolumeSpecName "kube-api-access-9vhlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.656147 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts" (OuterVolumeSpecName: "scripts") pod "edef7df8-01f6-4f77-a1e9-25f7feef5ccd" (UID: "edef7df8-01f6-4f77-a1e9-25f7feef5ccd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.682738 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data" (OuterVolumeSpecName: "config-data") pod "edef7df8-01f6-4f77-a1e9-25f7feef5ccd" (UID: "edef7df8-01f6-4f77-a1e9-25f7feef5ccd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.708877 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edef7df8-01f6-4f77-a1e9-25f7feef5ccd" (UID: "edef7df8-01f6-4f77-a1e9-25f7feef5ccd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.744158 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.744195 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.744206 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vhlg\" (UniqueName: \"kubernetes.io/projected/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-kube-api-access-9vhlg\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:32 crc kubenswrapper[4736]: I0316 15:36:32.744216 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edef7df8-01f6-4f77-a1e9-25f7feef5ccd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.080834 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qw4c4" Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.080721 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qw4c4" event={"ID":"edef7df8-01f6-4f77-a1e9-25f7feef5ccd","Type":"ContainerDied","Data":"3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07"} Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.082504 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ea6111cffdd143667a46a039c7a46fc57ffd737044926785b05599d7a4c7c07" Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.269732 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.270958 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="36d4a559-748a-4e80-8bce-c66068084394" containerName="nova-scheduler-scheduler" containerID="cri-o://88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" gracePeriod=30 Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.289867 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.290540 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-log" containerID="cri-o://0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68" gracePeriod=30 Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.290637 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-api" containerID="cri-o://46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893" gracePeriod=30 Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.309073 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.309671 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" 
containerName="nova-metadata-log" containerID="cri-o://6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1" gracePeriod=30 Mar 16 15:36:33 crc kubenswrapper[4736]: I0316 15:36:33.309879 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-metadata" containerID="cri-o://1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde" gracePeriod=30 Mar 16 15:36:34 crc kubenswrapper[4736]: I0316 15:36:34.092951 4736 generic.go:334] "Generic (PLEG): container finished" podID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerID="0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68" exitCode=143 Mar 16 15:36:34 crc kubenswrapper[4736]: I0316 15:36:34.093015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerDied","Data":"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68"} Mar 16 15:36:34 crc kubenswrapper[4736]: I0316 15:36:34.095269 4736 generic.go:334] "Generic (PLEG): container finished" podID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerID="6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1" exitCode=143 Mar 16 15:36:34 crc kubenswrapper[4736]: I0316 15:36:34.095304 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerDied","Data":"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1"} Mar 16 15:36:35 crc kubenswrapper[4736]: E0316 15:36:35.660374 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 is running failed: container process not found" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 16 15:36:35 crc kubenswrapper[4736]: E0316 15:36:35.661581 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 is running failed: container process not found" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 16 15:36:35 crc kubenswrapper[4736]: E0316 15:36:35.661898 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 is running failed: container process not found" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 16 15:36:35 crc kubenswrapper[4736]: E0316 15:36:35.661938 4736 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36d4a559-748a-4e80-8bce-c66068084394" containerName="nova-scheduler-scheduler" Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.809183 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.913660 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9n9s\" (UniqueName: \"kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s\") pod \"36d4a559-748a-4e80-8bce-c66068084394\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.913995 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle\") pod \"36d4a559-748a-4e80-8bce-c66068084394\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.915231 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data\") pod \"36d4a559-748a-4e80-8bce-c66068084394\" (UID: \"36d4a559-748a-4e80-8bce-c66068084394\") " Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.921468 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s" (OuterVolumeSpecName: "kube-api-access-t9n9s") pod "36d4a559-748a-4e80-8bce-c66068084394" (UID: "36d4a559-748a-4e80-8bce-c66068084394"). InnerVolumeSpecName "kube-api-access-t9n9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.947936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data" (OuterVolumeSpecName: "config-data") pod "36d4a559-748a-4e80-8bce-c66068084394" (UID: "36d4a559-748a-4e80-8bce-c66068084394"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:35 crc kubenswrapper[4736]: I0316 15:36:35.960734 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d4a559-748a-4e80-8bce-c66068084394" (UID: "36d4a559-748a-4e80-8bce-c66068084394"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.023312 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.023349 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9n9s\" (UniqueName: \"kubernetes.io/projected/36d4a559-748a-4e80-8bce-c66068084394-kube-api-access-t9n9s\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.023361 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d4a559-748a-4e80-8bce-c66068084394-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.115828 4736 generic.go:334] "Generic (PLEG): container finished" podID="36d4a559-748a-4e80-8bce-c66068084394" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" exitCode=0 Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.115881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36d4a559-748a-4e80-8bce-c66068084394","Type":"ContainerDied","Data":"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4"} Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.115918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36d4a559-748a-4e80-8bce-c66068084394","Type":"ContainerDied","Data":"f428914989dd517b72661482fb993c5ca043f5ba54f69fad9c88627e454c2cea"} Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.115920 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.115936 4736 scope.go:117] "RemoveContainer" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.144051 4736 scope.go:117] "RemoveContainer" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" Mar 16 15:36:36 crc kubenswrapper[4736]: E0316 15:36:36.144566 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4\": container with ID starting with 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 not found: ID does not exist" containerID="88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.144600 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4"} err="failed to get container status \"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4\": rpc error: code = NotFound desc = could not find container \"88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4\": container with ID starting with 88bb93a9aa69f4d00cd957d5071031966c7d878fea953fdaea9428445160f4c4 not found: ID does not exist" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.151368 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.163503 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.185325 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:36 crc kubenswrapper[4736]: E0316 15:36:36.185800 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="init" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.185821 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="init" Mar 16 15:36:36 crc kubenswrapper[4736]: E0316 15:36:36.185843 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="dnsmasq-dns" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.185851 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="dnsmasq-dns" Mar 16 15:36:36 crc kubenswrapper[4736]: E0316 15:36:36.185865 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edef7df8-01f6-4f77-a1e9-25f7feef5ccd" containerName="nova-manage" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.185872 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="edef7df8-01f6-4f77-a1e9-25f7feef5ccd" containerName="nova-manage" Mar 16 15:36:36 crc kubenswrapper[4736]: E0316 15:36:36.185893 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4a559-748a-4e80-8bce-c66068084394" containerName="nova-scheduler-scheduler" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.185900 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4a559-748a-4e80-8bce-c66068084394" containerName="nova-scheduler-scheduler" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 
15:36:36.188290 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4a559-748a-4e80-8bce-c66068084394" containerName="nova-scheduler-scheduler" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.188336 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="edef7df8-01f6-4f77-a1e9-25f7feef5ccd" containerName="nova-manage" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.188355 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cfe74a-03e4-42bf-bed9-2d21ef6031eb" containerName="dnsmasq-dns" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.189267 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.191547 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.204873 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.227748 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgzv\" (UniqueName: \"kubernetes.io/projected/c38eb8c1-13d7-4ef2-b026-d55b36f56919-kube-api-access-zzgzv\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.227900 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.227967 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-config-data\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.329322 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.329408 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-config-data\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.329496 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgzv\" (UniqueName: \"kubernetes.io/projected/c38eb8c1-13d7-4ef2-b026-d55b36f56919-kube-api-access-zzgzv\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.334784 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.341653 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c38eb8c1-13d7-4ef2-b026-d55b36f56919-config-data\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.345748 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgzv\" (UniqueName: \"kubernetes.io/projected/c38eb8c1-13d7-4ef2-b026-d55b36f56919-kube-api-access-zzgzv\") pod \"nova-scheduler-0\" (UID: \"c38eb8c1-13d7-4ef2-b026-d55b36f56919\") " pod="openstack/nova-scheduler-0" Mar 16 15:36:36 crc kubenswrapper[4736]: I0316 15:36:36.507696 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.000457 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d4a559-748a-4e80-8bce-c66068084394" path="/var/lib/kubelet/pods/36d4a559-748a-4e80-8bce-c66068084394/volumes" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.066735 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.094328 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.148766 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data\") pod \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.148828 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle\") pod \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.148925 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs\") pod \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.148970 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl885\" (UniqueName: \"kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885\") pod \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.149000 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs\") pod \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\" (UID: \"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f\") " Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.163625 4736 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs" (OuterVolumeSpecName: "logs") pod "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" (UID: "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.183430 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885" (OuterVolumeSpecName: "kube-api-access-vl885") pod "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" (UID: "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f"). InnerVolumeSpecName "kube-api-access-vl885". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.195069 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c38eb8c1-13d7-4ef2-b026-d55b36f56919","Type":"ContainerStarted","Data":"4ffcda2615578b36debed54c0ebecfd77bdc61965c533dee7ebb38bc6094946a"} Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.218340 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" (UID: "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.225836 4736 generic.go:334] "Generic (PLEG): container finished" podID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerID="1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde" exitCode=0 Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.225913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerDied","Data":"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde"} Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.225961 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3709ecdf-1b56-4e47-8c2a-31fc3a7e940f","Type":"ContainerDied","Data":"fa035525bad3324349297ec4e52dd11c82d48b6a12146eaca8f648ea2402dee7"} Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.225985 4736 scope.go:117] "RemoveContainer" containerID="1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.226195 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.230820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data" (OuterVolumeSpecName: "config-data") pod "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" (UID: "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.254496 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.254533 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl885\" (UniqueName: \"kubernetes.io/projected/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-kube-api-access-vl885\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.254548 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.254558 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.265352 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" (UID: "3709ecdf-1b56-4e47-8c2a-31fc3a7e940f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.273740 4736 scope.go:117] "RemoveContainer" containerID="6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.306986 4736 scope.go:117] "RemoveContainer" containerID="1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde" Mar 16 15:36:37 crc kubenswrapper[4736]: E0316 15:36:37.307589 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde\": container with ID starting with 1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde not found: ID does not exist" containerID="1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.307624 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde"} err="failed to get container status \"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde\": rpc error: code = NotFound desc = could not find container \"1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde\": container with ID starting with 1d9a2f0840afd93e704ea2c5502027afec9421a5a42aa1bfbe8d099dc5322bde not found: ID does not exist" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.307643 4736 scope.go:117] "RemoveContainer" containerID="6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1" Mar 16 15:36:37 crc kubenswrapper[4736]: E0316 15:36:37.307888 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1\": container with ID starting with 6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1 not found: ID does not 
exist" containerID="6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.307906 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1"} err="failed to get container status \"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1\": rpc error: code = NotFound desc = could not find container \"6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1\": container with ID starting with 6481211df4dc96e48aa6f1afaf366f2a8ca0e8dbd2fb4da9b3315cfc0ebd6dc1 not found: ID does not exist" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.357226 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.574379 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.584648 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.612837 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:37 crc kubenswrapper[4736]: E0316 15:36:37.613279 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-metadata" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.613356 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-metadata" Mar 16 15:36:37 crc kubenswrapper[4736]: E0316 15:36:37.613386 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-log" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.613394 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-log" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.613559 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-metadata" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.613581 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" containerName="nova-metadata-log" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.614549 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.623336 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.625368 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.663526 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hczr\" (UniqueName: \"kubernetes.io/projected/b839907f-5ee5-450e-b483-ace8fd0fb0d5-kube-api-access-4hczr\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.663629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.663731 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b839907f-5ee5-450e-b483-ace8fd0fb0d5-logs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.663758 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-config-data\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.663800 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.665237 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.766664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.766813 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b839907f-5ee5-450e-b483-ace8fd0fb0d5-logs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.766856 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-config-data\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 
15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.766904 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.766962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hczr\" (UniqueName: \"kubernetes.io/projected/b839907f-5ee5-450e-b483-ace8fd0fb0d5-kube-api-access-4hczr\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.768620 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b839907f-5ee5-450e-b483-ace8fd0fb0d5-logs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.780127 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.792982 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.793040 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b839907f-5ee5-450e-b483-ace8fd0fb0d5-config-data\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.804760 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hczr\" (UniqueName: \"kubernetes.io/projected/b839907f-5ee5-450e-b483-ace8fd0fb0d5-kube-api-access-4hczr\") pod \"nova-metadata-0\" (UID: \"b839907f-5ee5-450e-b483-ace8fd0fb0d5\") " pod="openstack/nova-metadata-0" Mar 16 15:36:37 crc kubenswrapper[4736]: I0316 15:36:37.933039 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.240467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c38eb8c1-13d7-4ef2-b026-d55b36f56919","Type":"ContainerStarted","Data":"2d64502cdbe1eab290776ac4648af7b8c79f74faa631096fcbddbb6c1076f373"} Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.279199 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.279172264 podStartE2EDuration="2.279172264s" podCreationTimestamp="2026-03-16 15:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:38.26340213 +0000 UTC m=+1399.990792417" watchObservedRunningTime="2026-03-16 15:36:38.279172264 +0000 UTC m=+1400.006562551" Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.505286 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.507817 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.507967 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:36:38 crc kubenswrapper[4736]: W0316 15:36:38.511616 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb839907f_5ee5_450e_b483_ace8fd0fb0d5.slice/crio-84f68d4f305254080fc6e5c91da37e68b237eeaaa02ca138df50fba83ab9498b WatchSource:0}: Error finding container 84f68d4f305254080fc6e5c91da37e68b237eeaaa02ca138df50fba83ab9498b: Status 404 returned error can't find the container with id 84f68d4f305254080fc6e5c91da37e68b237eeaaa02ca138df50fba83ab9498b Mar 16 15:36:38 crc kubenswrapper[4736]: I0316 15:36:38.991911 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3709ecdf-1b56-4e47-8c2a-31fc3a7e940f" path="/var/lib/kubelet/pods/3709ecdf-1b56-4e47-8c2a-31fc3a7e940f/volumes" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.192722 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.192881 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.205156 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.265584 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b839907f-5ee5-450e-b483-ace8fd0fb0d5","Type":"ContainerStarted","Data":"f8c741372cd6cbc8fdd20184d075fb981044558d174655ad3897c821bcb33a26"} Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.265639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b839907f-5ee5-450e-b483-ace8fd0fb0d5","Type":"ContainerStarted","Data":"274a5b9d9972f2390a3186165cb789bae4201ebd043873cbfc3cfe8316685465"} Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.265652 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b839907f-5ee5-450e-b483-ace8fd0fb0d5","Type":"ContainerStarted","Data":"84f68d4f305254080fc6e5c91da37e68b237eeaaa02ca138df50fba83ab9498b"} Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.268442 4736 generic.go:334] "Generic (PLEG): container finished" podID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerID="46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893" exitCode=0 Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.269117 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.269279 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerDied","Data":"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893"} Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.269307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cd5e627-3383-43f8-a354-d2ce73becd88","Type":"ContainerDied","Data":"c4ba95f5cdb712b09ca815c6440012b37bbd97ef4c1841ddacaf7b0965db4120"} Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.269324 4736 scope.go:117] "RemoveContainer" containerID="46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.304021 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.303994323 podStartE2EDuration="2.303994323s" podCreationTimestamp="2026-03-16 15:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:39.298601185 +0000 UTC m=+1401.025991472" watchObservedRunningTime="2026-03-16 15:36:39.303994323 +0000 UTC m=+1401.031384600" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307535 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307624 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307725 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307789 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mzqv\" (UniqueName: \"kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307851 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.307962 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.314770 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs" (OuterVolumeSpecName: "logs") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.334915 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv" (OuterVolumeSpecName: "kube-api-access-9mzqv") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "kube-api-access-9mzqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.336629 4736 scope.go:117] "RemoveContainer" containerID="0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.366468 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.397321 4736 scope.go:117] "RemoveContainer" containerID="46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893" Mar 16 15:36:39 crc kubenswrapper[4736]: E0316 15:36:39.397809 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893\": container with ID starting with 46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893 not found: ID does not exist" containerID="46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.397844 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893"} err="failed to get container status \"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893\": rpc error: code = NotFound desc = could not find container \"46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893\": container with ID starting with 46921e7ee5d917a149e958ff5850d6b1f73806e8ccadad72a092a82e6c86f893 not found: ID does not exist" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.397865 4736 scope.go:117] "RemoveContainer" containerID="0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68" Mar 16 15:36:39 crc kubenswrapper[4736]: E0316 15:36:39.398129 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68\": container with ID starting with 0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68 not found: ID does not exist" containerID="0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.398152 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68"} err="failed to get container status \"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68\": rpc error: code = NotFound desc = could not find container \"0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68\": container with ID starting with 0ecfbb6b535a2c0e7b8ae78d2c999595cfc91a470bd1ff14e48d826527483c68 not found: ID does not exist" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.409870 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.410532 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") pod \"5cd5e627-3383-43f8-a354-d2ce73becd88\" (UID: \"5cd5e627-3383-43f8-a354-d2ce73becd88\") " Mar 16 15:36:39 crc kubenswrapper[4736]: W0316 15:36:39.410660 4736 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5cd5e627-3383-43f8-a354-d2ce73becd88/volumes/kubernetes.io~secret/internal-tls-certs Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.410679 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.411413 4736 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cd5e627-3383-43f8-a354-d2ce73becd88-logs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.411503 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.411594 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.412126 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mzqv\" (UniqueName: \"kubernetes.io/projected/5cd5e627-3383-43f8-a354-d2ce73becd88-kube-api-access-9mzqv\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.412564 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data" (OuterVolumeSpecName: "config-data") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.433738 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5cd5e627-3383-43f8-a354-d2ce73becd88" (UID: "5cd5e627-3383-43f8-a354-d2ce73becd88"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.513741 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.513786 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd5e627-3383-43f8-a354-d2ce73becd88-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.636457 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.648798 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.662051 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:39 crc kubenswrapper[4736]: E0316 15:36:39.662569 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-api" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.662589 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-api" Mar 16 15:36:39 crc kubenswrapper[4736]: E0316 15:36:39.662627 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-log" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.662633 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-log" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.662818 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-api" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.662838 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" containerName="nova-api-log" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.664596 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.671092 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.671600 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.673196 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.683080 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.831988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-logs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.832074 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.832137 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.832191 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5zf\" (UniqueName: \"kubernetes.io/projected/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-kube-api-access-hj5zf\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.832208 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-config-data\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.832270 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933515 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933586 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933640 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5zf\" (UniqueName: \"kubernetes.io/projected/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-kube-api-access-hj5zf\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-config-data\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933717 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.933766 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-logs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.934223 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-logs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.942452 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-config-data\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.943931 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-public-tls-certs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.944027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.944639 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.964172 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5zf\" (UniqueName: \"kubernetes.io/projected/ffe59cdd-6766-4d9b-a82c-0287d028a8d0-kube-api-access-hj5zf\") pod \"nova-api-0\" (UID: \"ffe59cdd-6766-4d9b-a82c-0287d028a8d0\") " 
pod="openstack/nova-api-0" Mar 16 15:36:39 crc kubenswrapper[4736]: I0316 15:36:39.993617 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 16 15:36:40 crc kubenswrapper[4736]: I0316 15:36:40.462456 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 16 15:36:40 crc kubenswrapper[4736]: W0316 15:36:40.479718 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe59cdd_6766_4d9b_a82c_0287d028a8d0.slice/crio-65fbc382f520e546de1b74d6f8390e61de5545bfc89f1ea4516ce7d0aaced004 WatchSource:0}: Error finding container 65fbc382f520e546de1b74d6f8390e61de5545bfc89f1ea4516ce7d0aaced004: Status 404 returned error can't find the container with id 65fbc382f520e546de1b74d6f8390e61de5545bfc89f1ea4516ce7d0aaced004 Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:40.999838 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd5e627-3383-43f8-a354-d2ce73becd88" path="/var/lib/kubelet/pods/5cd5e627-3383-43f8-a354-d2ce73becd88/volumes" Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:41.305625 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffe59cdd-6766-4d9b-a82c-0287d028a8d0","Type":"ContainerStarted","Data":"4fddc4c4414bbaf8c8728c75e135b5b3c7e542852b2ac01c51ab1821e0046a89"} Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:41.305672 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffe59cdd-6766-4d9b-a82c-0287d028a8d0","Type":"ContainerStarted","Data":"85efc03035da79a071e9540acfbbe1705a8f58bfae02d921d84456acde462470"} Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:41.305687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ffe59cdd-6766-4d9b-a82c-0287d028a8d0","Type":"ContainerStarted","Data":"65fbc382f520e546de1b74d6f8390e61de5545bfc89f1ea4516ce7d0aaced004"} Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:41.327704 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.327686247 podStartE2EDuration="2.327686247s" podCreationTimestamp="2026-03-16 15:36:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:36:41.325603511 +0000 UTC m=+1403.052993818" watchObservedRunningTime="2026-03-16 15:36:41.327686247 +0000 UTC m=+1403.055076534" Mar 16 15:36:41 crc kubenswrapper[4736]: I0316 15:36:41.508082 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 16 15:36:45 crc kubenswrapper[4736]: I0316 15:36:45.493644 4736 scope.go:117] "RemoveContainer" containerID="6745450ba06f002766ab495fc801d3d5544d0a839aeac824ebf6498a09bf3b12" Mar 16 15:36:46 crc kubenswrapper[4736]: I0316 15:36:46.508419 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 16 15:36:46 crc kubenswrapper[4736]: I0316 15:36:46.551443 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 16 15:36:47 crc kubenswrapper[4736]: I0316 15:36:47.395955 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 16 15:36:47 crc kubenswrapper[4736]: I0316 15:36:47.934213 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Mar 16 15:36:47 crc kubenswrapper[4736]: I0316 15:36:47.934947 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 16 15:36:48 crc kubenswrapper[4736]: I0316 15:36:48.951300 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b839907f-5ee5-450e-b483-ace8fd0fb0d5" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:48 crc kubenswrapper[4736]: I0316 15:36:48.951680 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b839907f-5ee5-450e-b483-ace8fd0fb0d5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:49 crc kubenswrapper[4736]: I0316 15:36:49.994150 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:49 crc kubenswrapper[4736]: I0316 15:36:49.994204 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 16 15:36:51 crc kubenswrapper[4736]: I0316 15:36:51.035485 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffe59cdd-6766-4d9b-a82c-0287d028a8d0" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:51 crc kubenswrapper[4736]: I0316 15:36:51.076379 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ffe59cdd-6766-4d9b-a82c-0287d028a8d0" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 15:36:54 crc kubenswrapper[4736]: I0316 15:36:54.831882 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 16 15:36:55 crc kubenswrapper[4736]: I0316 15:36:55.933896 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 16 15:36:55 crc kubenswrapper[4736]: I0316 15:36:55.934355 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 16 15:36:57 crc kubenswrapper[4736]: I0316 15:36:57.945512 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 16 15:36:57 crc kubenswrapper[4736]: I0316 15:36:57.947407 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 16 15:36:57 crc kubenswrapper[4736]: I0316 15:36:57.955311 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 16 15:36:57 crc kubenswrapper[4736]: I0316 15:36:57.994331 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:36:57 crc kubenswrapper[4736]: I0316 15:36:57.994856 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 16 15:36:58 crc kubenswrapper[4736]: I0316 15:36:58.518179 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 16 15:37:00 crc 
kubenswrapper[4736]: I0316 15:37:00.002594 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 16 15:37:00 crc kubenswrapper[4736]: I0316 15:37:00.004320 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 16 15:37:00 crc kubenswrapper[4736]: I0316 15:37:00.010522 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 16 15:37:00 crc kubenswrapper[4736]: I0316 15:37:00.038356 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 16 15:37:07 crc kubenswrapper[4736]: E0316 15:37:07.937609 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="0892ebc9-dbd4-4652-9691-13028da07f80" Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.507753 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.508255 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.508302 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.509180 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.509269 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e" gracePeriod=600 Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.644739 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e" exitCode=0 Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.644785 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e"} Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 15:37:08.644972 4736 scope.go:117] "RemoveContainer" containerID="42060cddfe4c59472f3be42cb2e0bb18ea86207173b4fad3d63b9e861b6fe74e" Mar 16 15:37:08 crc kubenswrapper[4736]: I0316 
15:37:08.645144 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:37:09 crc kubenswrapper[4736]: I0316 15:37:09.656972 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7"} Mar 16 15:37:10 crc kubenswrapper[4736]: I0316 15:37:10.294758 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:37:10 crc kubenswrapper[4736]: E0316 15:37:10.295075 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:37:10 crc kubenswrapper[4736]: E0316 15:37:10.295133 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 16 15:37:10 crc kubenswrapper[4736]: E0316 15:37:10.295223 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift podName:0892ebc9-dbd4-4652-9691-13028da07f80 nodeName:}" failed. No retries permitted until 2026-03-16 15:39:12.295196704 +0000 UTC m=+1554.022587031 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift") pod "swift-storage-0" (UID: "0892ebc9-dbd4-4652-9691-13028da07f80") : configmap "swift-ring-files" not found Mar 16 15:37:45 crc kubenswrapper[4736]: I0316 15:37:45.885624 4736 scope.go:117] "RemoveContainer" containerID="2fef9ead1db6d1a0f6ed9b735d599ef12ff1fc98fd77396e9ab15ecc235ab57b" Mar 16 15:37:45 crc kubenswrapper[4736]: I0316 15:37:45.909645 4736 scope.go:117] "RemoveContainer" containerID="2fdc991772647e8f9e82a90a6e036097e6f4e3e7023d5725236a84aa3609e469" Mar 16 15:37:45 crc kubenswrapper[4736]: I0316 15:37:45.964682 4736 scope.go:117] "RemoveContainer" containerID="c4ccb2ca0fb811b54227ef052038e094027e03b3b98fdbd0bafa1e787643d441" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.174647 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561258-8kwp4"] Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.177424 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.181279 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.181359 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.181769 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.206321 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561258-8kwp4"] Mar 16 15:38:00 crc kubenswrapper[4736]: E0316 15:38:00.339883 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-678dd4f677-jxtsk" podUID="bccee937-d642-4483-87fb-033b157cf68c" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.350460 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf\") pod \"auto-csr-approver-29561258-8kwp4\" (UID: \"a5f66e4c-c0a5-4996-a966-05636ae1b7ad\") " pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.451475 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf\") pod \"auto-csr-approver-29561258-8kwp4\" (UID: \"a5f66e4c-c0a5-4996-a966-05636ae1b7ad\") " pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.486252 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf\") pod \"auto-csr-approver-29561258-8kwp4\" (UID: \"a5f66e4c-c0a5-4996-a966-05636ae1b7ad\") " pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:00 crc kubenswrapper[4736]: I0316 15:38:00.500578 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:01 crc kubenswrapper[4736]: I0316 15:38:01.013938 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:38:01 crc kubenswrapper[4736]: I0316 15:38:01.024551 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561258-8kwp4"] Mar 16 15:38:01 crc kubenswrapper[4736]: I0316 15:38:01.293074 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:38:01 crc kubenswrapper[4736]: I0316 15:38:01.293085 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" event={"ID":"a5f66e4c-c0a5-4996-a966-05636ae1b7ad","Type":"ContainerStarted","Data":"1f683affc153c905a858715ff0d7d23e1f019c8007d55b07949fcc122939af32"} Mar 16 15:38:03 crc kubenswrapper[4736]: I0316 15:38:03.319788 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" event={"ID":"a5f66e4c-c0a5-4996-a966-05636ae1b7ad","Type":"ContainerStarted","Data":"7d3af9008c47cc08df959583174e801bed5e3721387782fc8131598486ea4ec1"} Mar 16 15:38:03 crc kubenswrapper[4736]: I0316 15:38:03.348671 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" podStartSLOduration=1.988944959 podStartE2EDuration="3.348646822s" podCreationTimestamp="2026-03-16 15:38:00 +0000 UTC" firstStartedPulling="2026-03-16 15:38:01.013610712 +0000 UTC m=+1482.741000999" lastFinishedPulling="2026-03-16 15:38:02.373312565 +0000 UTC m=+1484.100702862" observedRunningTime="2026-03-16 15:38:03.345538487 +0000 UTC m=+1485.072928774" watchObservedRunningTime="2026-03-16 15:38:03.348646822 +0000 UTC m=+1485.076037109" Mar 16 15:38:03 crc kubenswrapper[4736]: I0316 15:38:03.633481 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:38:03 crc kubenswrapper[4736]: E0316 15:38:03.633698 4736 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 16 15:38:03 crc kubenswrapper[4736]: E0316 15:38:03.633757 4736 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-proxy-678dd4f677-jxtsk: configmap "swift-ring-files" not found Mar 16 15:38:03 crc kubenswrapper[4736]: E0316 15:38:03.633852 4736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift podName:bccee937-d642-4483-87fb-033b157cf68c nodeName:}" failed. No retries permitted until 2026-03-16 15:40:05.63382165 +0000 UTC m=+1607.361211977 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift") pod "swift-proxy-678dd4f677-jxtsk" (UID: "bccee937-d642-4483-87fb-033b157cf68c") : configmap "swift-ring-files" not found Mar 16 15:38:04 crc kubenswrapper[4736]: I0316 15:38:04.334601 4736 generic.go:334] "Generic (PLEG): container finished" podID="a5f66e4c-c0a5-4996-a966-05636ae1b7ad" containerID="7d3af9008c47cc08df959583174e801bed5e3721387782fc8131598486ea4ec1" exitCode=0 Mar 16 15:38:04 crc kubenswrapper[4736]: I0316 15:38:04.334660 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" event={"ID":"a5f66e4c-c0a5-4996-a966-05636ae1b7ad","Type":"ContainerDied","Data":"7d3af9008c47cc08df959583174e801bed5e3721387782fc8131598486ea4ec1"} Mar 16 15:38:05 crc kubenswrapper[4736]: I0316 15:38:05.786988 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:05 crc kubenswrapper[4736]: I0316 15:38:05.886327 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf\") pod \"a5f66e4c-c0a5-4996-a966-05636ae1b7ad\" (UID: \"a5f66e4c-c0a5-4996-a966-05636ae1b7ad\") " Mar 16 15:38:05 crc kubenswrapper[4736]: I0316 15:38:05.896828 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf" (OuterVolumeSpecName: "kube-api-access-6n4vf") pod "a5f66e4c-c0a5-4996-a966-05636ae1b7ad" (UID: "a5f66e4c-c0a5-4996-a966-05636ae1b7ad"). InnerVolumeSpecName "kube-api-access-6n4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:38:05 crc kubenswrapper[4736]: I0316 15:38:05.991635 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n4vf\" (UniqueName: \"kubernetes.io/projected/a5f66e4c-c0a5-4996-a966-05636ae1b7ad-kube-api-access-6n4vf\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.376161 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" event={"ID":"a5f66e4c-c0a5-4996-a966-05636ae1b7ad","Type":"ContainerDied","Data":"1f683affc153c905a858715ff0d7d23e1f019c8007d55b07949fcc122939af32"} Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.377807 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f683affc153c905a858715ff0d7d23e1f019c8007d55b07949fcc122939af32" Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.377749 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561258-8kwp4" Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.440584 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561252-bhqrv"] Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.449779 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561252-bhqrv"] Mar 16 15:38:06 crc kubenswrapper[4736]: I0316 15:38:06.996411 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b7b3c49-39de-43da-af30-7a34a07d7022" path="/var/lib/kubelet/pods/9b7b3c49-39de-43da-af30-7a34a07d7022/volumes" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.294450 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6f7cb"] Mar 16 15:38:44 crc kubenswrapper[4736]: E0316 15:38:44.295910 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5f66e4c-c0a5-4996-a966-05636ae1b7ad" containerName="oc" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.295926 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5f66e4c-c0a5-4996-a966-05636ae1b7ad" containerName="oc" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.296135 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5f66e4c-c0a5-4996-a966-05636ae1b7ad" containerName="oc" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.296777 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.303844 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.304600 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.315424 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6f7cb"] Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.327958 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328068 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328220 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328315 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328353 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfd6\" (UniqueName: \"kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328450 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.328486 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430419 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430486 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430539 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430586 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430678 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430710 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfd6\" (UniqueName: \"kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.430948 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.432708 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.432899 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts\") pod \"swift-ring-rebalance-6f7cb\" (UID: 
\"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.445073 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.446303 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.446561 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.448668 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfd6\" (UniqueName: \"kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6\") pod \"swift-ring-rebalance-6f7cb\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.628599 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9rw6b" Mar 16 15:38:44 crc kubenswrapper[4736]: I0316 15:38:44.636447 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:45 crc kubenswrapper[4736]: I0316 15:38:45.109578 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6f7cb"] Mar 16 15:38:45 crc kubenswrapper[4736]: I0316 15:38:45.826801 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6f7cb" event={"ID":"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1","Type":"ContainerStarted","Data":"e791a69bf545d6e340539cff0ff4aa5571b7eb44db34cf3596c3975c17b7995f"} Mar 16 15:38:46 crc kubenswrapper[4736]: I0316 15:38:46.151980 4736 scope.go:117] "RemoveContainer" containerID="968e6936851f09c8695f5b59772bbb4af871706eabfed810e8726f74eeeca3ec" Mar 16 15:38:46 crc kubenswrapper[4736]: I0316 15:38:46.212091 4736 scope.go:117] "RemoveContainer" containerID="cb6bf0fe6c0a31f30a0b19798762adbcad06d0261ffb4839609e4359fd8b2c7a" Mar 16 15:38:49 crc kubenswrapper[4736]: I0316 15:38:49.878155 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6f7cb" event={"ID":"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1","Type":"ContainerStarted","Data":"35b4512e00f440d533a3e30bded548147870334c1e7b03f9080894a0a420d3cc"} Mar 16 15:38:49 crc kubenswrapper[4736]: I0316 15:38:49.915612 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6f7cb" podStartSLOduration=1.692190216 podStartE2EDuration="5.915593213s" podCreationTimestamp="2026-03-16 15:38:44 +0000 UTC" firstStartedPulling="2026-03-16 15:38:45.116576299 +0000 UTC m=+1526.843966586" lastFinishedPulling="2026-03-16 15:38:49.339979256 +0000 UTC m=+1531.067369583" observedRunningTime="2026-03-16 15:38:49.899344233 +0000 UTC m=+1531.626734530" watchObservedRunningTime="2026-03-16 15:38:49.915593213 +0000 UTC m=+1531.642983490" Mar 16 15:38:56 crc kubenswrapper[4736]: I0316 15:38:56.952693 4736 generic.go:334] "Generic (PLEG): container finished" podID="48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" containerID="35b4512e00f440d533a3e30bded548147870334c1e7b03f9080894a0a420d3cc" exitCode=0 Mar 16 15:38:56 crc kubenswrapper[4736]: I0316 15:38:56.952771 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6f7cb" event={"ID":"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1","Type":"ContainerDied","Data":"35b4512e00f440d533a3e30bded548147870334c1e7b03f9080894a0a420d3cc"} Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.361732 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.470787 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.470878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.470910 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.470948 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.471000 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzfd6\" (UniqueName: \"kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.471059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.471192 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts\") pod \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\" (UID: \"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1\") " Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.471664 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.473154 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.476993 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6" (OuterVolumeSpecName: "kube-api-access-bzfd6") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "kube-api-access-bzfd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.495825 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts" (OuterVolumeSpecName: "scripts") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.499394 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.508009 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.518582 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" (UID: "48b165ae-e228-45fa-a5d3-6d1f8c8f43b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574266 4736 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-scripts\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574362 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574384 4736 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574401 4736 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574413 4736 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574425 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzfd6\" (UniqueName: \"kubernetes.io/projected/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-kube-api-access-bzfd6\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.574440 4736 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/48b165ae-e228-45fa-a5d3-6d1f8c8f43b1-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.983866 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6f7cb" Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.991597 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6f7cb" event={"ID":"48b165ae-e228-45fa-a5d3-6d1f8c8f43b1","Type":"ContainerDied","Data":"e791a69bf545d6e340539cff0ff4aa5571b7eb44db34cf3596c3975c17b7995f"} Mar 16 15:38:58 crc kubenswrapper[4736]: I0316 15:38:58.991647 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e791a69bf545d6e340539cff0ff4aa5571b7eb44db34cf3596c3975c17b7995f" Mar 16 15:39:08 crc kubenswrapper[4736]: I0316 15:39:08.507995 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:39:08 crc kubenswrapper[4736]: I0316 15:39:08.508594 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:39:11 crc kubenswrapper[4736]: E0316 15:39:11.647826 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="0892ebc9-dbd4-4652-9691-13028da07f80" Mar 16 15:39:12 crc kubenswrapper[4736]: I0316 15:39:12.131727 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:39:12 crc kubenswrapper[4736]: I0316 15:39:12.378367 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:39:12 crc kubenswrapper[4736]: I0316 15:39:12.390412 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0892ebc9-dbd4-4652-9691-13028da07f80-etc-swift\") pod \"swift-storage-0\" (UID: \"0892ebc9-dbd4-4652-9691-13028da07f80\") " pod="openstack/swift-storage-0" Mar 16 15:39:12 crc kubenswrapper[4736]: I0316 15:39:12.433414 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 16 15:39:13 crc kubenswrapper[4736]: I0316 15:39:13.138321 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 16 15:39:14 crc kubenswrapper[4736]: I0316 15:39:14.173042 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"2aade766fa36333e12870280c27bb6e1c5dbb94f6b0227de0d3e954f19e8954d"} Mar 16 15:39:15 crc kubenswrapper[4736]: I0316 15:39:15.252750 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"dce407e087393144e4759eeb97805bc5744adc5c67182caa9d08247d585e0e32"} Mar 16 15:39:15 crc kubenswrapper[4736]: I0316 15:39:15.253381 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"ec1c363caf13af5b30259aac1bbe72dff3afaf0b2831f1d92dc83639f7db4818"} Mar 16 15:39:15 crc kubenswrapper[4736]: I0316 15:39:15.253395 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"7d37b8bfdfc5da0bab20ffa32769da86c1135c985923d7ed2143727cb7f28d28"} Mar 16 15:39:15 crc kubenswrapper[4736]: I0316 15:39:15.253406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"7fb08142b96339141ad6af6d2a72aa69b9606474822f8f89dcc29a438da21c08"} Mar 16 15:39:16 crc kubenswrapper[4736]: I0316 15:39:16.265452 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"1cb2487809527eae34f1152400d3b164013677de5c4dd427af6a396e88d202cd"} Mar 16 15:39:17 crc kubenswrapper[4736]: I0316 15:39:17.281048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"ffb81f714cfc1a836f22ab9e99cca787cbfe7e8a5dfc9d4c37ebbebdba4689f6"} Mar 16 15:39:17 crc kubenswrapper[4736]: I0316 15:39:17.281114 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"9e351ca87bcab950e3bb8c9f8a99a68d871a49ca0493b153d7a81d71802c7e30"} Mar 16 15:39:17 crc kubenswrapper[4736]: I0316 15:39:17.281125 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"ba2dbddae1424fd95f517e24de220993dec1c0b381e5ab7930fc0c93dee450df"} Mar 16 15:39:18 crc kubenswrapper[4736]: I0316 15:39:18.329254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"9c57b8cea01a74a62d15f91def027bcca03398c6425836e829e76b121893e794"} Mar 16 15:39:18 crc kubenswrapper[4736]: I0316 15:39:18.329803 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"5484bc85524e5c467cd9a54e8d805a250c9d4181ea663f02feaae45891c79c27"} Mar 16 15:39:19 crc 
kubenswrapper[4736]: I0316 15:39:19.347962 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"c53ea4bc1ffa9714809540c88297c775d110da698d620e40f7e39c414a82a2a9"} Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.348360 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"5cca5e3bc77e4f128949d0c0193d9943590a253cf838cfff3f9b4347d0167dbe"} Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.348371 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"86e43fb43e8c5202b4d96df51455bc9ee86d3e852dd43354eb813d2cdfe7ba28"} Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.348379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"64c6eb609b31bb605c2d53b3d1df7f03511133319ac8b65ce8487b14ef315fde"} Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.348388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0892ebc9-dbd4-4652-9691-13028da07f80","Type":"ContainerStarted","Data":"553de300a5185851fd95a696ba87f0047aca002ef3bc8ac0cab4b0f25db2f109"} Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.395146 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=498.817643 podStartE2EDuration="8m23.395124144s" podCreationTimestamp="2026-03-16 15:30:56 +0000 UTC" firstStartedPulling="2026-03-16 15:39:13.149694103 +0000 UTC m=+1554.877084390" lastFinishedPulling="2026-03-16 15:39:17.727175227 +0000 UTC m=+1559.454565534" observedRunningTime="2026-03-16 15:39:19.383858672 +0000 UTC m=+1561.111248959" watchObservedRunningTime="2026-03-16 15:39:19.395124144 +0000 UTC m=+1561.122514431" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.707936 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:39:19 crc kubenswrapper[4736]: E0316 15:39:19.710553 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" containerName="swift-ring-rebalance" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.710575 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" containerName="swift-ring-rebalance" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.710755 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b165ae-e228-45fa-a5d3-6d1f8c8f43b1" containerName="swift-ring-rebalance" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.711706 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.715293 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.730063 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836020 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836111 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836192 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836221 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836236 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q79f\" (UniqueName: \"kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.836255 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.937735 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938068 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: 
\"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938173 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938211 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q79f\" (UniqueName: \"kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938249 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938691 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.938975 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.939138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 crc kubenswrapper[4736]: I0316 15:39:19.939141 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:19 
crc kubenswrapper[4736]: I0316 15:39:19.960543 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q79f\" (UniqueName: \"kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f\") pod \"dnsmasq-dns-5ff6ff9699-tkccb\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:20 crc kubenswrapper[4736]: I0316 15:39:20.037070 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:20 crc kubenswrapper[4736]: I0316 15:39:20.544947 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:39:20 crc kubenswrapper[4736]: W0316 15:39:20.550266 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8ba66d5_5da5_472a_844d_a17b77655425.slice/crio-0278f71441a291bde7cfe390b80313ee2773c220b8c51abb3240eaa8e0813b8c WatchSource:0}: Error finding container 0278f71441a291bde7cfe390b80313ee2773c220b8c51abb3240eaa8e0813b8c: Status 404 returned error can't find the container with id 0278f71441a291bde7cfe390b80313ee2773c220b8c51abb3240eaa8e0813b8c Mar 16 15:39:21 crc kubenswrapper[4736]: I0316 15:39:21.368986 4736 generic.go:334] "Generic (PLEG): container finished" podID="c8ba66d5-5da5-472a-844d-a17b77655425" containerID="7c1f622b962971f095b8d2c8db09947a8910c489c88ff669165752f5a654dc54" exitCode=0 Mar 16 15:39:21 crc kubenswrapper[4736]: I0316 15:39:21.369272 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" event={"ID":"c8ba66d5-5da5-472a-844d-a17b77655425","Type":"ContainerDied","Data":"7c1f622b962971f095b8d2c8db09947a8910c489c88ff669165752f5a654dc54"} Mar 16 15:39:21 crc kubenswrapper[4736]: I0316 15:39:21.369375 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" event={"ID":"c8ba66d5-5da5-472a-844d-a17b77655425","Type":"ContainerStarted","Data":"0278f71441a291bde7cfe390b80313ee2773c220b8c51abb3240eaa8e0813b8c"} Mar 16 15:39:22 crc kubenswrapper[4736]: I0316 15:39:22.383908 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" event={"ID":"c8ba66d5-5da5-472a-844d-a17b77655425","Type":"ContainerStarted","Data":"f909aca514d002f1d15c88c7d2bacc714e3b2f65c8f7ec5266efda9e6a45109d"} Mar 16 15:39:22 crc kubenswrapper[4736]: I0316 15:39:22.385324 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:22 crc kubenswrapper[4736]: I0316 15:39:22.418535 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" podStartSLOduration=3.418516986 podStartE2EDuration="3.418516986s" podCreationTimestamp="2026-03-16 15:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:39:22.407664065 +0000 UTC m=+1564.135054362" watchObservedRunningTime="2026-03-16 15:39:22.418516986 +0000 UTC m=+1564.145907273" Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.038401 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.127840 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.128198 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="dnsmasq-dns" containerID="cri-o://4f73e59ce8766f2759453342c1038c3ceeb0ecc4dace95cad349bbcc974a7b92" gracePeriod=10 Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.523746 4736 generic.go:334] "Generic (PLEG): container finished" podID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerID="4f73e59ce8766f2759453342c1038c3ceeb0ecc4dace95cad349bbcc974a7b92" exitCode=0 Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.524093 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" event={"ID":"e662df69-1ac9-4967-a5a3-e72675cf70ff","Type":"ContainerDied","Data":"4f73e59ce8766f2759453342c1038c3ceeb0ecc4dace95cad349bbcc974a7b92"} Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.849376 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.922163 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config\") pod \"e662df69-1ac9-4967-a5a3-e72675cf70ff\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.922631 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc\") pod \"e662df69-1ac9-4967-a5a3-e72675cf70ff\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.922746 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb\") pod \"e662df69-1ac9-4967-a5a3-e72675cf70ff\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.922905 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbx9r\" (UniqueName: \"kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r\") pod \"e662df69-1ac9-4967-a5a3-e72675cf70ff\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.923010 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb\") pod \"e662df69-1ac9-4967-a5a3-e72675cf70ff\" (UID: \"e662df69-1ac9-4967-a5a3-e72675cf70ff\") " Mar 16 15:39:30 crc kubenswrapper[4736]: I0316 15:39:30.998584 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r" (OuterVolumeSpecName: "kube-api-access-mbx9r") pod "e662df69-1ac9-4967-a5a3-e72675cf70ff" (UID: "e662df69-1ac9-4967-a5a3-e72675cf70ff"). InnerVolumeSpecName "kube-api-access-mbx9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.028247 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbx9r\" (UniqueName: \"kubernetes.io/projected/e662df69-1ac9-4967-a5a3-e72675cf70ff-kube-api-access-mbx9r\") on node \"crc\" DevicePath \"\"" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.106868 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e662df69-1ac9-4967-a5a3-e72675cf70ff" (UID: "e662df69-1ac9-4967-a5a3-e72675cf70ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.114014 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e662df69-1ac9-4967-a5a3-e72675cf70ff" (UID: "e662df69-1ac9-4967-a5a3-e72675cf70ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.122883 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e662df69-1ac9-4967-a5a3-e72675cf70ff" (UID: "e662df69-1ac9-4967-a5a3-e72675cf70ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.129014 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config" (OuterVolumeSpecName: "config") pod "e662df69-1ac9-4967-a5a3-e72675cf70ff" (UID: "e662df69-1ac9-4967-a5a3-e72675cf70ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.132040 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.132141 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.132159 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.132171 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e662df69-1ac9-4967-a5a3-e72675cf70ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.537822 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" event={"ID":"e662df69-1ac9-4967-a5a3-e72675cf70ff","Type":"ContainerDied","Data":"5da8f608641b856507b68ed10073df4f2450bc4f122964a1047419fce1cfadb2"} Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.537874 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f7855c887-z2wwb" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.537906 4736 scope.go:117] "RemoveContainer" containerID="4f73e59ce8766f2759453342c1038c3ceeb0ecc4dace95cad349bbcc974a7b92" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.572806 4736 scope.go:117] "RemoveContainer" containerID="5ee12c30475891f01b8fb37e2e1b6297103b0a84582384d3c384ec3570c6ec66" Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.586294 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:39:31 crc kubenswrapper[4736]: I0316 15:39:31.595649 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f7855c887-z2wwb"] Mar 16 15:39:32 crc kubenswrapper[4736]: I0316 15:39:32.989524 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" path="/var/lib/kubelet/pods/e662df69-1ac9-4967-a5a3-e72675cf70ff/volumes" Mar 16 15:39:38 crc kubenswrapper[4736]: I0316 15:39:38.507645 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:39:38 crc kubenswrapper[4736]: I0316 15:39:38.508617 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:39:46 crc kubenswrapper[4736]: I0316 15:39:46.335932 4736 scope.go:117] "RemoveContainer" containerID="5265934484f061fe78b8bb8f456241117f6ebf2461459504fdd39ccb8385dc56" Mar 16 15:39:46 crc kubenswrapper[4736]: I0316 15:39:46.369648 4736 scope.go:117] "RemoveContainer" containerID="6ee095a55e8eea421626c98630666bd9f5215ac2b44878bfffec35b72b21da54" Mar 16 15:39:46 crc kubenswrapper[4736]: I0316 15:39:46.452434 4736 scope.go:117] "RemoveContainer" containerID="c5b423c00c460f9a66eeab99191397a8cfd3e65e0e00d9710ca0facad764d09c" Mar 16 15:39:46 crc kubenswrapper[4736]: I0316 15:39:46.500329 4736 scope.go:117] "RemoveContainer" containerID="6333ff67604da865829bb9fef5a314db61bbf3710b586801fb39251551619648" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.361384 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:39:52 crc kubenswrapper[4736]: E0316 15:39:52.362845 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="init" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.362867 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="init" Mar 16 15:39:52 crc kubenswrapper[4736]: E0316 15:39:52.362925 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="dnsmasq-dns" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.362932 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="dnsmasq-dns" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.363155 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e662df69-1ac9-4967-a5a3-e72675cf70ff" containerName="dnsmasq-dns" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.364920 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.381306 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.526774 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.527430 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.527669 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qw9d\" (UniqueName: \"kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.629443 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.629497 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qw9d\" (UniqueName: \"kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.629626 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.630240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.630534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities\") pod \"community-operators-5q5wk\" (UID: 
\"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.649184 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qw9d\" (UniqueName: \"kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d\") pod \"community-operators-5q5wk\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:52 crc kubenswrapper[4736]: I0316 15:39:52.690200 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:39:53 crc kubenswrapper[4736]: I0316 15:39:53.206831 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:39:53 crc kubenswrapper[4736]: I0316 15:39:53.831201 4736 generic.go:334] "Generic (PLEG): container finished" podID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerID="a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd" exitCode=0 Mar 16 15:39:53 crc kubenswrapper[4736]: I0316 15:39:53.831282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerDied","Data":"a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd"} Mar 16 15:39:53 crc kubenswrapper[4736]: I0316 15:39:53.831480 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerStarted","Data":"7ed2d548460132f5c4a153212beb5bb63ef276aa7f8325024f28df85407666c2"} Mar 16 15:39:54 crc kubenswrapper[4736]: I0316 15:39:54.841363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerStarted","Data":"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9"} Mar 16 15:39:56 crc kubenswrapper[4736]: I0316 15:39:56.865211 4736 generic.go:334] "Generic (PLEG): container finished" podID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerID="5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9" exitCode=0 Mar 16 15:39:56 crc kubenswrapper[4736]: I0316 15:39:56.865295 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerDied","Data":"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9"} Mar 16 15:39:57 crc kubenswrapper[4736]: I0316 15:39:57.875700 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerStarted","Data":"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de"} Mar 16 15:39:57 crc kubenswrapper[4736]: I0316 15:39:57.902847 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5q5wk" podStartSLOduration=2.305183404 podStartE2EDuration="5.902806764s" podCreationTimestamp="2026-03-16 15:39:52 +0000 UTC" firstStartedPulling="2026-03-16 15:39:53.83389996 +0000 UTC m=+1595.561290277" lastFinishedPulling="2026-03-16 15:39:57.43152334 +0000 UTC m=+1599.158913637" observedRunningTime="2026-03-16 15:39:57.8957768 +0000 UTC m=+1599.623167077" 
watchObservedRunningTime="2026-03-16 15:39:57.902806764 +0000 UTC m=+1599.630197051" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.140599 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561260-bflcm"] Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.142564 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.147233 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.147566 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.149042 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.152232 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561260-bflcm"] Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.316432 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bnf\" (UniqueName: \"kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf\") pod \"auto-csr-approver-29561260-bflcm\" (UID: \"07ed35f7-92c1-4d82-acea-7733d8dd9be9\") " pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.418234 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bnf\" (UniqueName: \"kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf\") pod \"auto-csr-approver-29561260-bflcm\" (UID: \"07ed35f7-92c1-4d82-acea-7733d8dd9be9\") " pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.445462 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bnf\" (UniqueName: \"kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf\") pod \"auto-csr-approver-29561260-bflcm\" (UID: \"07ed35f7-92c1-4d82-acea-7733d8dd9be9\") " pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:00 crc kubenswrapper[4736]: I0316 15:40:00.508389 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:01 crc kubenswrapper[4736]: I0316 15:40:01.029413 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561260-bflcm"] Mar 16 15:40:01 crc kubenswrapper[4736]: I0316 15:40:01.912003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561260-bflcm" event={"ID":"07ed35f7-92c1-4d82-acea-7733d8dd9be9","Type":"ContainerStarted","Data":"d2156456ea0fb2f298a43cddd1c36f48a1c5c3e2f80f7e7fadc926291c6805a0"} Mar 16 15:40:02 crc kubenswrapper[4736]: I0316 15:40:02.692157 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:02 crc kubenswrapper[4736]: I0316 15:40:02.693146 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:02 crc kubenswrapper[4736]: I0316 15:40:02.923054 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561260-bflcm" event={"ID":"07ed35f7-92c1-4d82-acea-7733d8dd9be9","Type":"ContainerStarted","Data":"77f54924305095b576cf3ff70fee23ebc95ce4a82f1a0e59911a5b2a12ccc9e2"} Mar 16 15:40:02 crc kubenswrapper[4736]: I0316 15:40:02.937406 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561260-bflcm" podStartSLOduration=1.559153692 podStartE2EDuration="2.937388499s" podCreationTimestamp="2026-03-16 15:40:00 +0000 UTC" firstStartedPulling="2026-03-16 15:40:01.016035626 +0000 UTC m=+1602.743425913" lastFinishedPulling="2026-03-16 15:40:02.394270433 +0000 UTC m=+1604.121660720" observedRunningTime="2026-03-16 15:40:02.936680549 +0000 UTC m=+1604.664070836" watchObservedRunningTime="2026-03-16 15:40:02.937388499 +0000 UTC m=+1604.664778786" Mar 16 15:40:03 crc kubenswrapper[4736]: I0316 15:40:03.775970 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5q5wk" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="registry-server" probeResult="failure" output=< Mar 16 15:40:03 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:40:03 crc kubenswrapper[4736]: > Mar 16 15:40:03 crc kubenswrapper[4736]: I0316 15:40:03.934627 4736 generic.go:334] "Generic (PLEG): container finished" podID="07ed35f7-92c1-4d82-acea-7733d8dd9be9" containerID="77f54924305095b576cf3ff70fee23ebc95ce4a82f1a0e59911a5b2a12ccc9e2" exitCode=0 Mar 16 15:40:03 crc kubenswrapper[4736]: I0316 15:40:03.934680 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561260-bflcm" event={"ID":"07ed35f7-92c1-4d82-acea-7733d8dd9be9","Type":"ContainerDied","Data":"77f54924305095b576cf3ff70fee23ebc95ce4a82f1a0e59911a5b2a12ccc9e2"} Mar 16 15:40:04 crc kubenswrapper[4736]: E0316 15:40:04.296984 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-proxy-678dd4f677-jxtsk" podUID="bccee937-d642-4483-87fb-033b157cf68c" Mar 16 15:40:04 crc kubenswrapper[4736]: I0316 15:40:04.945360 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.446290 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.630084 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8bnf\" (UniqueName: \"kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf\") pod \"07ed35f7-92c1-4d82-acea-7733d8dd9be9\" (UID: \"07ed35f7-92c1-4d82-acea-7733d8dd9be9\") " Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.642509 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf" (OuterVolumeSpecName: "kube-api-access-l8bnf") pod "07ed35f7-92c1-4d82-acea-7733d8dd9be9" (UID: "07ed35f7-92c1-4d82-acea-7733d8dd9be9"). InnerVolumeSpecName "kube-api-access-l8bnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.733472 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.733621 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8bnf\" (UniqueName: \"kubernetes.io/projected/07ed35f7-92c1-4d82-acea-7733d8dd9be9-kube-api-access-l8bnf\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.742742 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bccee937-d642-4483-87fb-033b157cf68c-etc-swift\") pod \"swift-proxy-678dd4f677-jxtsk\" (UID: \"bccee937-d642-4483-87fb-033b157cf68c\") " pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.849118 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.988885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561260-bflcm" event={"ID":"07ed35f7-92c1-4d82-acea-7733d8dd9be9","Type":"ContainerDied","Data":"d2156456ea0fb2f298a43cddd1c36f48a1c5c3e2f80f7e7fadc926291c6805a0"} Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.989350 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2156456ea0fb2f298a43cddd1c36f48a1c5c3e2f80f7e7fadc926291c6805a0" Mar 16 15:40:05 crc kubenswrapper[4736]: I0316 15:40:05.989406 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561260-bflcm" Mar 16 15:40:06 crc kubenswrapper[4736]: I0316 15:40:06.087716 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561254-fl8vh"] Mar 16 15:40:06 crc kubenswrapper[4736]: I0316 15:40:06.113505 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561254-fl8vh"] Mar 16 15:40:06 crc kubenswrapper[4736]: I0316 15:40:06.435861 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-678dd4f677-jxtsk"] Mar 16 15:40:06 crc kubenswrapper[4736]: W0316 15:40:06.440528 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbccee937_d642_4483_87fb_033b157cf68c.slice/crio-fa9e606d7590d77ce46b27dad0f56863d09598393da1204d67411cc9104ba23a WatchSource:0}: Error finding container fa9e606d7590d77ce46b27dad0f56863d09598393da1204d67411cc9104ba23a: Status 404 returned error can't find the container with id fa9e606d7590d77ce46b27dad0f56863d09598393da1204d67411cc9104ba23a Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.003225 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4074fbb2-9d24-491f-9053-54171f6e4dbd" path="/var/lib/kubelet/pods/4074fbb2-9d24-491f-9053-54171f6e4dbd/volumes" Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.034215 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-678dd4f677-jxtsk" event={"ID":"bccee937-d642-4483-87fb-033b157cf68c","Type":"ContainerStarted","Data":"6988572ccc91a8a4596f2df4e6bbac9e36187d7af90061c0a693fd641deaba6b"} Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.034299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-678dd4f677-jxtsk" event={"ID":"bccee937-d642-4483-87fb-033b157cf68c","Type":"ContainerStarted","Data":"fa133bc28e2a1869281ed248a7f23005e0ee1bbb29fa6103cb5ced6fdf50cf0e"} Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.034314 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-678dd4f677-jxtsk" event={"ID":"bccee937-d642-4483-87fb-033b157cf68c","Type":"ContainerStarted","Data":"fa9e606d7590d77ce46b27dad0f56863d09598393da1204d67411cc9104ba23a"} Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.034533 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:07 crc kubenswrapper[4736]: I0316 15:40:07.063339 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-678dd4f677-jxtsk" podStartSLOduration=374.063310194 podStartE2EDuration="6m14.063310194s" podCreationTimestamp="2026-03-16 15:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:40:07.057532924 +0000 UTC m=+1608.784923211" watchObservedRunningTime="2026-03-16 15:40:07.063310194 +0000 UTC m=+1608.790700481" Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.051385 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.508007 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.508096 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.508241 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.509408 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:40:08 crc kubenswrapper[4736]: I0316 15:40:08.509472 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" gracePeriod=600 Mar 16 15:40:08 crc kubenswrapper[4736]: E0316 15:40:08.639623 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:40:09 crc kubenswrapper[4736]: I0316 15:40:09.080198 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" exitCode=0 Mar 16 15:40:09 crc kubenswrapper[4736]: I0316 15:40:09.082252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7"} Mar 16 15:40:09 crc kubenswrapper[4736]: I0316 15:40:09.082383 4736 scope.go:117] "RemoveContainer" containerID="08f32cddeb0a32067a3f6c160d0a3610c524e1e7ffc4c4430b9902318c96ac8e" Mar 16 15:40:09 crc kubenswrapper[4736]: I0316 15:40:09.083131 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:40:09 crc kubenswrapper[4736]: E0316 15:40:09.083418 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:40:12 crc kubenswrapper[4736]: I0316 15:40:12.751823 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:12 crc kubenswrapper[4736]: I0316 15:40:12.807850 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:13 crc kubenswrapper[4736]: I0316 15:40:13.033160 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.136888 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5q5wk" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="registry-server" containerID="cri-o://2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de" gracePeriod=2 Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.650362 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.782175 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content\") pod \"bca766c3-4adb-4609-a917-7e07a11ffc4f\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.782269 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qw9d\" (UniqueName: \"kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d\") pod \"bca766c3-4adb-4609-a917-7e07a11ffc4f\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.782490 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities\") pod \"bca766c3-4adb-4609-a917-7e07a11ffc4f\" (UID: \"bca766c3-4adb-4609-a917-7e07a11ffc4f\") " Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.791994 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities" (OuterVolumeSpecName: "utilities") pod "bca766c3-4adb-4609-a917-7e07a11ffc4f" (UID: "bca766c3-4adb-4609-a917-7e07a11ffc4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.812525 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d" (OuterVolumeSpecName: "kube-api-access-5qw9d") pod "bca766c3-4adb-4609-a917-7e07a11ffc4f" (UID: "bca766c3-4adb-4609-a917-7e07a11ffc4f"). InnerVolumeSpecName "kube-api-access-5qw9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.843424 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bca766c3-4adb-4609-a917-7e07a11ffc4f" (UID: "bca766c3-4adb-4609-a917-7e07a11ffc4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.884684 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.884729 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bca766c3-4adb-4609-a917-7e07a11ffc4f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:14 crc kubenswrapper[4736]: I0316 15:40:14.884742 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qw9d\" (UniqueName: \"kubernetes.io/projected/bca766c3-4adb-4609-a917-7e07a11ffc4f-kube-api-access-5qw9d\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.147710 4736 generic.go:334] "Generic (PLEG): container finished" podID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerID="2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de" exitCode=0 Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.147769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerDied","Data":"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de"} Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.148799 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5q5wk" event={"ID":"bca766c3-4adb-4609-a917-7e07a11ffc4f","Type":"ContainerDied","Data":"7ed2d548460132f5c4a153212beb5bb63ef276aa7f8325024f28df85407666c2"} Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.148873 4736 scope.go:117] "RemoveContainer" containerID="2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.147851 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5q5wk" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.172325 4736 scope.go:117] "RemoveContainer" containerID="5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.182388 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.194409 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5q5wk"] Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.219144 4736 scope.go:117] "RemoveContainer" containerID="a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.242783 4736 scope.go:117] "RemoveContainer" containerID="2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de" Mar 16 15:40:15 crc kubenswrapper[4736]: E0316 15:40:15.243392 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de\": container with ID starting with 2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de not found: ID does not exist" containerID="2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.243424 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de"} err="failed to get container status \"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de\": rpc error: code = NotFound desc = could not find container \"2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de\": container with ID starting with 2b1199e90f9b964d65594d581ee499756ca5562c241f9ceace81cd81c4c825de not found: ID does not exist" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.243446 4736 scope.go:117] "RemoveContainer" containerID="5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9" Mar 16 15:40:15 crc kubenswrapper[4736]: E0316 15:40:15.244985 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9\": container with ID starting with 5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9 not found: ID does not exist" containerID="5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.245011 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9"} err="failed to get container status \"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9\": rpc error: code = NotFound desc = could not find container \"5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9\": container with ID starting with 5c1e33b93129adb2504dae0597aa97a5ef60296d70db293399ba4de054c920d9 not found: ID does not exist" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.245027 4736 scope.go:117] "RemoveContainer" containerID="a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd" Mar 16 15:40:15 crc kubenswrapper[4736]: E0316 15:40:15.245673 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd\": container with ID starting with a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd not found: ID does not exist" containerID="a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.245699 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd"} err="failed to get container status \"a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd\": rpc error: code = NotFound desc = could not find container \"a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd\": container with ID starting with a8e4560a53c3f97c4cbbe3607603d34a9223558f5ebb63a033304a1c4fc358dd not found: ID does not exist" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.893442 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:15 crc kubenswrapper[4736]: I0316 15:40:15.920401 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-678dd4f677-jxtsk" Mar 16 15:40:16 crc kubenswrapper[4736]: I0316 15:40:16.992572 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" path="/var/lib/kubelet/pods/bca766c3-4adb-4609-a917-7e07a11ffc4f/volumes" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.734391 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:17 crc kubenswrapper[4736]: E0316 15:40:17.735980 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="extract-utilities" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.736005 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="extract-utilities" Mar 16 15:40:17 crc kubenswrapper[4736]: E0316 15:40:17.736031 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="registry-server" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.736039 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="registry-server" Mar 16 15:40:17 crc kubenswrapper[4736]: E0316 15:40:17.736066 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ed35f7-92c1-4d82-acea-7733d8dd9be9" containerName="oc" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.736072 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ed35f7-92c1-4d82-acea-7733d8dd9be9" containerName="oc" Mar 16 15:40:17 crc kubenswrapper[4736]: E0316 15:40:17.736089 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="extract-content" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.736096 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="extract-content" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.736523 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bca766c3-4adb-4609-a917-7e07a11ffc4f" containerName="registry-server" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 
15:40:17.736555 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ed35f7-92c1-4d82-acea-7733d8dd9be9" containerName="oc" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.748854 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.761344 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.917364 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.917560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhs8\" (UniqueName: \"kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:17 crc kubenswrapper[4736]: I0316 15:40:17.917673 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.019671 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.020314 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.020954 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.021528 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqhs8\" (UniqueName: \"kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.021425 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.046193 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhs8\" (UniqueName: \"kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8\") pod \"redhat-marketplace-kc5g2\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.124075 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:18 crc kubenswrapper[4736]: I0316 15:40:18.632439 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:19 crc kubenswrapper[4736]: I0316 15:40:19.195966 4736 generic.go:334] "Generic (PLEG): container finished" podID="e129126d-fe87-47c1-8098-56d95255d546" containerID="2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3" exitCode=0 Mar 16 15:40:19 crc kubenswrapper[4736]: I0316 15:40:19.196036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerDied","Data":"2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3"} Mar 16 15:40:19 crc kubenswrapper[4736]: I0316 15:40:19.198179 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerStarted","Data":"39367d6816dbc8c5e29d016faefd2277784e9eb9191b42059fbf90ac11226834"} Mar 16 15:40:19 crc kubenswrapper[4736]: I0316 15:40:19.978613 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:40:19 crc kubenswrapper[4736]: E0316 15:40:19.979594 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:40:20 crc kubenswrapper[4736]: I0316 15:40:20.213029 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerStarted","Data":"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604"} Mar 16 15:40:22 crc kubenswrapper[4736]: I0316 15:40:22.237418 4736 generic.go:334] "Generic (PLEG): container finished" podID="e129126d-fe87-47c1-8098-56d95255d546" containerID="9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604" exitCode=0 Mar 16 15:40:22 crc kubenswrapper[4736]: I0316 15:40:22.237529 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerDied","Data":"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604"} Mar 16 15:40:23 crc kubenswrapper[4736]: I0316 15:40:23.256318 4736 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerStarted","Data":"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826"} Mar 16 15:40:23 crc kubenswrapper[4736]: I0316 15:40:23.277748 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kc5g2" podStartSLOduration=2.776696336 podStartE2EDuration="6.277712438s" podCreationTimestamp="2026-03-16 15:40:17 +0000 UTC" firstStartedPulling="2026-03-16 15:40:19.197593402 +0000 UTC m=+1620.924983689" lastFinishedPulling="2026-03-16 15:40:22.698609504 +0000 UTC m=+1624.425999791" observedRunningTime="2026-03-16 15:40:23.273916053 +0000 UTC m=+1625.001306350" watchObservedRunningTime="2026-03-16 15:40:23.277712438 +0000 UTC m=+1625.005102715" Mar 16 15:40:24 crc kubenswrapper[4736]: I0316 15:40:24.637272 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:25 crc kubenswrapper[4736]: I0316 15:40:25.954573 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:28 crc kubenswrapper[4736]: I0316 15:40:28.124379 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:28 crc kubenswrapper[4736]: I0316 15:40:28.124860 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:29 crc kubenswrapper[4736]: I0316 15:40:29.187917 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-kc5g2" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="registry-server" probeResult="failure" output=< Mar 16 15:40:29 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:40:29 crc kubenswrapper[4736]: > Mar 16 15:40:29 crc kubenswrapper[4736]: I0316 15:40:29.802234 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Mar 16 15:40:29 crc kubenswrapper[4736]: I0316 15:40:29.863478 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="rabbitmq" containerID="cri-o://501dc9d256dd94e2b7424efff25421d73d4dd41d42dcdab4adda73b4b8210496" gracePeriod=604795 Mar 16 15:40:30 crc kubenswrapper[4736]: I0316 15:40:30.824236 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="rabbitmq" containerID="cri-o://501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d" gracePeriod=604796 Mar 16 15:40:30 crc kubenswrapper[4736]: I0316 15:40:30.978000 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:40:30 crc kubenswrapper[4736]: E0316 15:40:30.978403 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.770602 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.772705 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.797000 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.963054 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.963150 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6kd\" (UniqueName: \"kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:32 crc kubenswrapper[4736]: I0316 15:40:32.963204 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.065204 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.065398 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.065451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6kd\" (UniqueName: \"kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.066001 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.066361 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.093213 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6kd\" (UniqueName: \"kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd\") pod \"certified-operators-r9b8d\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.391238 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:33 crc kubenswrapper[4736]: I0316 15:40:33.888542 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:34 crc kubenswrapper[4736]: I0316 15:40:34.372832 4736 generic.go:334] "Generic (PLEG): container finished" podID="b7670450-d08a-434a-a9c3-455434e19343" containerID="b88453b0f7cc550dd01bb72ee2568af1bda0a64a99cacda2dd02fc8dd2537e6a" exitCode=0 Mar 16 15:40:34 crc kubenswrapper[4736]: I0316 15:40:34.373082 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerDied","Data":"b88453b0f7cc550dd01bb72ee2568af1bda0a64a99cacda2dd02fc8dd2537e6a"} Mar 16 15:40:34 crc kubenswrapper[4736]: I0316 15:40:34.373363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerStarted","Data":"23c1d68387a44e05e8dbb8f0f962660f6fe2e3713c81faecaaf7a9cb495b64db"} Mar 16 15:40:35 crc kubenswrapper[4736]: I0316 15:40:35.385154 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerStarted","Data":"eadeab5dd3ae7d2eb43c14cc41c2c0af76eb0f8e113b9b4501a79510f56957c7"} Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.399644 4736 generic.go:334] "Generic (PLEG): container finished" podID="343be938-86f7-45c1-b8ef-a3143202be82" containerID="501dc9d256dd94e2b7424efff25421d73d4dd41d42dcdab4adda73b4b8210496" exitCode=0 Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.400162 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerDied","Data":"501dc9d256dd94e2b7424efff25421d73d4dd41d42dcdab4adda73b4b8210496"} Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.400265 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"343be938-86f7-45c1-b8ef-a3143202be82","Type":"ContainerDied","Data":"1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537"} Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.400279 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b8fa3bf076ec7b1d9a0ddebc282e27234b10fd8568440e98b48a580219c7537" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.485030 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.544844 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.544903 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.544935 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.544974 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.545023 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.545063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.545146 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.547660 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.549404 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.549543 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.557349 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.557435 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info" (OuterVolumeSpecName: "pod-info") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.562301 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.647963 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.657966 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.658140 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.658345 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnmtk\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk\") pod \"343be938-86f7-45c1-b8ef-a3143202be82\" (UID: \"343be938-86f7-45c1-b8ef-a3143202be82\") " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671873 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671915 
4736 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671934 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671945 4736 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/343be938-86f7-45c1-b8ef-a3143202be82-pod-info\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671954 4736 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/343be938-86f7-45c1-b8ef-a3143202be82-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.671967 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.721863 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.727206 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk" (OuterVolumeSpecName: "kube-api-access-cnmtk") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "kube-api-access-cnmtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.765892 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data" (OuterVolumeSpecName: "config-data") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.775437 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.784695 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.784725 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.784739 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.784762 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnmtk\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-kube-api-access-cnmtk\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.839161 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf" (OuterVolumeSpecName: "server-conf") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.852229 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "343be938-86f7-45c1-b8ef-a3143202be82" (UID: "343be938-86f7-45c1-b8ef-a3143202be82"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.887160 4736 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/343be938-86f7-45c1-b8ef-a3143202be82-server-conf\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:36 crc kubenswrapper[4736]: I0316 15:40:36.887205 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/343be938-86f7-45c1-b8ef-a3143202be82-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.378390 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395183 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395240 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395278 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395360 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4vn2\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395408 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395497 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395548 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395608 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395679 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395746 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: 
\"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.395780 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret\") pod \"582900c6-e591-4ff4-ac53-a8965af431e2\" (UID: \"582900c6-e591-4ff4-ac53-a8965af431e2\") " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.396331 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.400192 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.400548 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.409444 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.452547 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.453959 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2" (OuterVolumeSpecName: "kube-api-access-f4vn2") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "kube-api-access-f4vn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.472700 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info" (OuterVolumeSpecName: "pod-info") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.472773 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.473215 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data" (OuterVolumeSpecName: "config-data") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.482830 4736 generic.go:334] "Generic (PLEG): container finished" podID="582900c6-e591-4ff4-ac53-a8965af431e2" containerID="501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d" exitCode=0 Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.482898 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerDied","Data":"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d"} Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.482925 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"582900c6-e591-4ff4-ac53-a8965af431e2","Type":"ContainerDied","Data":"5398aeaafa4630b70670235f94f05a9349eb3ecd03762e8b2936df6f875909a8"} Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.482919 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.482944 4736 scope.go:117] "RemoveContainer" containerID="501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.491907 4736 generic.go:334] "Generic (PLEG): container finished" podID="b7670450-d08a-434a-a9c3-455434e19343" containerID="eadeab5dd3ae7d2eb43c14cc41c2c0af76eb0f8e113b9b4501a79510f56957c7" exitCode=0 Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.492009 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.496216 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerDied","Data":"eadeab5dd3ae7d2eb43c14cc41c2c0af76eb0f8e113b9b4501a79510f56957c7"} Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500029 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500053 4736 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/582900c6-e591-4ff4-ac53-a8965af431e2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500063 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500074 4736 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500083 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4vn2\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-kube-api-access-f4vn2\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500119 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500128 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500136 4736 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/582900c6-e591-4ff4-ac53-a8965af431e2-pod-info\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.500144 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.538421 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf" (OuterVolumeSpecName: "server-conf") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.576006 4736 scope.go:117] "RemoveContainer" containerID="fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.590495 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.607257 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.607283 4736 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/582900c6-e591-4ff4-ac53-a8965af431e2-server-conf\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.621184 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "582900c6-e591-4ff4-ac53-a8965af431e2" (UID: "582900c6-e591-4ff4-ac53-a8965af431e2"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.667868 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.681555 4736 scope.go:117] "RemoveContainer" containerID="501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d" Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.682007 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d\": container with ID starting with 501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d not found: ID does not exist" containerID="501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.682037 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d"} err="failed to get container status \"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d\": rpc error: code = NotFound desc = could not find container \"501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d\": container with ID starting with 501a39f0357b2d260be56384e2411cc489d7f31f3e7014f27fc44ce4f2e7ab5d not found: ID does not exist" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.682059 4736 scope.go:117] "RemoveContainer" containerID="fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5" Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.682405 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5\": container with ID starting with fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5 not found: ID does not exist" containerID="fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.682437 4736 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5"} err="failed to get container status \"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5\": rpc error: code = NotFound desc = could not find container \"fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5\": container with ID starting with fe9b43956d77ed742b1abf72f183143a9580027753dd1c959c175ca9fe80b5e5 not found: ID does not exist" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.689165 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.708998 4736 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/582900c6-e591-4ff4-ac53-a8965af431e2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.723377 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.723897 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="setup-container" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.723919 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="setup-container" Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.723948 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.723956 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.723980 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.723988 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: E0316 15:40:37.724002 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="setup-container" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.724010 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="setup-container" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.724258 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.724295 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="343be938-86f7-45c1-b8ef-a3143202be82" containerName="rabbitmq" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.728335 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.729863 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.730953 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733487 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hw6vc" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733559 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733639 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733809 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733869 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.733989 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810184 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810243 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810282 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b8db200-f455-4868-8ffd-7f129434034e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810301 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810338 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810363 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/0b8db200-f455-4868-8ffd-7f129434034e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810407 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810431 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4lj\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-kube-api-access-wm4lj\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810466 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810503 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.810528 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.827366 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.837301 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.850892 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.852738 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.856312 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fcb59" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.856383 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.856496 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.856789 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.857708 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.857734 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.857754 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.878072 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.912206 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.912641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b8db200-f455-4868-8ffd-7f129434034e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.912744 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.912827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4lj\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-kube-api-access-wm4lj\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.912918 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913009 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913088 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913224 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913295 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913369 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b8db200-f455-4868-8ffd-7f129434034e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.913440 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.915442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.915676 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.916379 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-config-data\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.920403 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.920450 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0b8db200-f455-4868-8ffd-7f129434034e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.920893 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.922876 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.926435 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.938681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0b8db200-f455-4868-8ffd-7f129434034e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.941980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0b8db200-f455-4868-8ffd-7f129434034e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.953083 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4lj\" (UniqueName: \"kubernetes.io/projected/0b8db200-f455-4868-8ffd-7f129434034e-kube-api-access-wm4lj\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:37 crc kubenswrapper[4736]: I0316 15:40:37.990313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"0b8db200-f455-4868-8ffd-7f129434034e\") " pod="openstack/rabbitmq-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da4145d1-110a-477c-ba28-813d6c53db11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 
16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015628 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8k77\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-kube-api-access-l8k77\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015656 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015706 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015742 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015802 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015822 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015851 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.015895 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da4145d1-110a-477c-ba28-813d6c53db11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.045753 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117500 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117596 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da4145d1-110a-477c-ba28-813d6c53db11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117642 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da4145d1-110a-477c-ba28-813d6c53db11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117678 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117712 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8k77\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-kube-api-access-l8k77\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117737 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117784 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117938 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117965 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.117985 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.119078 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.119385 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.120094 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.120873 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.121736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da4145d1-110a-477c-ba28-813d6c53db11-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.132055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 
15:40:38.132362 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.132442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da4145d1-110a-477c-ba28-813d6c53db11-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.143867 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da4145d1-110a-477c-ba28-813d6c53db11-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.165146 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8k77\" (UniqueName: \"kubernetes.io/projected/da4145d1-110a-477c-ba28-813d6c53db11-kube-api-access-l8k77\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.210635 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da4145d1-110a-477c-ba28-813d6c53db11\") " pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.238150 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.301355 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.468232 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.515614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerStarted","Data":"ca9422bfcff2dff1724b6a93440833ded45aa9f3814cc287bff4ca666777b1c5"} Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.582448 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9b8d" podStartSLOduration=2.958150483 podStartE2EDuration="6.582418522s" podCreationTimestamp="2026-03-16 15:40:32 +0000 UTC" firstStartedPulling="2026-03-16 15:40:34.375187973 +0000 UTC m=+1636.102578260" lastFinishedPulling="2026-03-16 15:40:37.999456012 +0000 UTC m=+1639.726846299" observedRunningTime="2026-03-16 15:40:38.561793281 +0000 UTC m=+1640.289183568" watchObservedRunningTime="2026-03-16 15:40:38.582418522 +0000 UTC m=+1640.309808809" Mar 16 15:40:38 crc kubenswrapper[4736]: W0316 15:40:38.725316 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b8db200_f455_4868_8ffd_7f129434034e.slice/crio-582364c6fe48081793652a8d644b2ba50cae817c814dc0e3d555b2298dc7a0f0 WatchSource:0}: Error finding container 582364c6fe48081793652a8d644b2ba50cae817c814dc0e3d555b2298dc7a0f0: Status 404 returned error can't find the container with id 582364c6fe48081793652a8d644b2ba50cae817c814dc0e3d555b2298dc7a0f0 Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.770665 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.991372 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343be938-86f7-45c1-b8ef-a3143202be82" path="/var/lib/kubelet/pods/343be938-86f7-45c1-b8ef-a3143202be82/volumes" Mar 16 15:40:38 crc kubenswrapper[4736]: I0316 15:40:38.992786 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582900c6-e591-4ff4-ac53-a8965af431e2" path="/var/lib/kubelet/pods/582900c6-e591-4ff4-ac53-a8965af431e2/volumes" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.039543 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.041881 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.044324 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.055398 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.104885 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.152842 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.152901 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.152928 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.153005 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.153065 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.153235 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.153276 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8ft\" (UniqueName: \"kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255195 4736 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255254 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8ft\" (UniqueName: \"kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255321 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255356 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255393 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.255492 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.256513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.256527 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.256799 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.257192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.257326 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.257671 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.277851 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8ft\" (UniqueName: \"kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft\") pod \"dnsmasq-dns-84944565bc-qv7d5\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.363789 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.543965 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da4145d1-110a-477c-ba28-813d6c53db11","Type":"ContainerStarted","Data":"c0ade7b068a7c5c7c052d4f706b25f9dd7d0ec03c005c8311ffd2d0109f82e20"} Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.545562 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.545957 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kc5g2" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="registry-server" containerID="cri-o://93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826" gracePeriod=2 Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.546020 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b8db200-f455-4868-8ffd-7f129434034e","Type":"ContainerStarted","Data":"582364c6fe48081793652a8d644b2ba50cae817c814dc0e3d555b2298dc7a0f0"} Mar 16 15:40:39 crc kubenswrapper[4736]: I0316 15:40:39.955534 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.268334 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.398704 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content\") pod \"e129126d-fe87-47c1-8098-56d95255d546\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.398758 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities\") pod \"e129126d-fe87-47c1-8098-56d95255d546\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.398816 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhs8\" (UniqueName: \"kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8\") pod \"e129126d-fe87-47c1-8098-56d95255d546\" (UID: \"e129126d-fe87-47c1-8098-56d95255d546\") " Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.402746 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities" (OuterVolumeSpecName: "utilities") pod "e129126d-fe87-47c1-8098-56d95255d546" (UID: "e129126d-fe87-47c1-8098-56d95255d546"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.404072 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8" (OuterVolumeSpecName: "kube-api-access-fqhs8") pod "e129126d-fe87-47c1-8098-56d95255d546" (UID: "e129126d-fe87-47c1-8098-56d95255d546"). InnerVolumeSpecName "kube-api-access-fqhs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.424937 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e129126d-fe87-47c1-8098-56d95255d546" (UID: "e129126d-fe87-47c1-8098-56d95255d546"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.501249 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.501279 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e129126d-fe87-47c1-8098-56d95255d546-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.501292 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqhs8\" (UniqueName: \"kubernetes.io/projected/e129126d-fe87-47c1-8098-56d95255d546-kube-api-access-fqhs8\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.556264 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da4145d1-110a-477c-ba28-813d6c53db11","Type":"ContainerStarted","Data":"1eb23b3f87620e74c88aaa7cf5ca8a92622bfe39c6ee3fdc607f6d78a2581467"} Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.559165 4736 generic.go:334] "Generic (PLEG): container finished" podID="e129126d-fe87-47c1-8098-56d95255d546" containerID="93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826" exitCode=0 Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.559229 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kc5g2" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.559234 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerDied","Data":"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826"} Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.559317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kc5g2" event={"ID":"e129126d-fe87-47c1-8098-56d95255d546","Type":"ContainerDied","Data":"39367d6816dbc8c5e29d016faefd2277784e9eb9191b42059fbf90ac11226834"} Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.559342 4736 scope.go:117] "RemoveContainer" containerID="93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.560369 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" event={"ID":"f3dd9367-a1d0-42c9-bf6c-3393089a493b","Type":"ContainerStarted","Data":"46c84808937f1c60eee1edf1514f1e5b179c8a7a9b9ce3f77ff59c1e5341e683"} Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.581797 4736 scope.go:117] "RemoveContainer" containerID="9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.611577 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.621550 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kc5g2"] Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.631646 4736 scope.go:117] "RemoveContainer" containerID="2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.686289 4736 scope.go:117] 
"RemoveContainer" containerID="93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826" Mar 16 15:40:40 crc kubenswrapper[4736]: E0316 15:40:40.692246 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826\": container with ID starting with 93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826 not found: ID does not exist" containerID="93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.692287 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826"} err="failed to get container status \"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826\": rpc error: code = NotFound desc = could not find container \"93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826\": container with ID starting with 93eb1f014278e174eb5a76aaff637a97ce1b62ebed88beae29b6c3750bd46826 not found: ID does not exist" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.692313 4736 scope.go:117] "RemoveContainer" containerID="9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604" Mar 16 15:40:40 crc kubenswrapper[4736]: E0316 15:40:40.694588 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604\": container with ID starting with 9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604 not found: ID does not exist" containerID="9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.694614 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604"} err="failed to get container status \"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604\": rpc error: code = NotFound desc = could not find container \"9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604\": container with ID starting with 9789ebd1bbf6958cd87dffba8f0f4ce3dafa4bd5ebafafcb3296c3fdbb875604 not found: ID does not exist" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.694631 4736 scope.go:117] "RemoveContainer" containerID="2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3" Mar 16 15:40:40 crc kubenswrapper[4736]: E0316 15:40:40.694956 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3\": container with ID starting with 2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3 not found: ID does not exist" containerID="2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3" Mar 16 15:40:40 crc kubenswrapper[4736]: I0316 15:40:40.694977 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3"} err="failed to get container status \"2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3\": rpc error: code = NotFound desc = could not find container \"2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3\": container with ID starting with 
2f7d457f93e48e935fc69c042412bd622099dc68af8617e4bd9e5cf8ea2b48f3 not found: ID does not exist" Mar 16 15:40:41 crc kubenswrapper[4736]: I0316 15:40:41.036159 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e129126d-fe87-47c1-8098-56d95255d546" path="/var/lib/kubelet/pods/e129126d-fe87-47c1-8098-56d95255d546/volumes" Mar 16 15:40:41 crc kubenswrapper[4736]: I0316 15:40:41.571676 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b8db200-f455-4868-8ffd-7f129434034e","Type":"ContainerStarted","Data":"f56b047cc25eac32facac80dbf6169d9a55e7e0a0d81e587500c6c25b99f6e55"} Mar 16 15:40:41 crc kubenswrapper[4736]: I0316 15:40:41.576492 4736 generic.go:334] "Generic (PLEG): container finished" podID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerID="0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da" exitCode=0 Mar 16 15:40:41 crc kubenswrapper[4736]: I0316 15:40:41.576757 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" event={"ID":"f3dd9367-a1d0-42c9-bf6c-3393089a493b","Type":"ContainerDied","Data":"0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da"} Mar 16 15:40:42 crc kubenswrapper[4736]: I0316 15:40:42.592302 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" event={"ID":"f3dd9367-a1d0-42c9-bf6c-3393089a493b","Type":"ContainerStarted","Data":"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def"} Mar 16 15:40:42 crc kubenswrapper[4736]: I0316 15:40:42.615679 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" podStartSLOduration=3.615655989 podStartE2EDuration="3.615655989s" podCreationTimestamp="2026-03-16 15:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:40:42.615602657 +0000 UTC m=+1644.342992944" watchObservedRunningTime="2026-03-16 15:40:42.615655989 +0000 UTC m=+1644.343046286" Mar 16 15:40:43 crc kubenswrapper[4736]: I0316 15:40:43.391380 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:43 crc kubenswrapper[4736]: I0316 15:40:43.391434 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:43 crc kubenswrapper[4736]: I0316 15:40:43.463902 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:43 crc kubenswrapper[4736]: I0316 15:40:43.601899 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:43 crc kubenswrapper[4736]: I0316 15:40:43.648649 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:44 crc kubenswrapper[4736]: I0316 15:40:44.340074 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:44 crc kubenswrapper[4736]: I0316 15:40:44.979518 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:40:44 crc kubenswrapper[4736]: E0316 15:40:44.980564 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:40:45 crc kubenswrapper[4736]: I0316 15:40:45.619234 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r9b8d" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="registry-server" containerID="cri-o://ca9422bfcff2dff1724b6a93440833ded45aa9f3814cc287bff4ca666777b1c5" gracePeriod=2 Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.589331 4736 scope.go:117] "RemoveContainer" containerID="2b423fc65aca3355837af1c8c08bd0591ff54a664ef759155f18ef866d482d88" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.634589 4736 generic.go:334] "Generic (PLEG): container finished" podID="b7670450-d08a-434a-a9c3-455434e19343" containerID="ca9422bfcff2dff1724b6a93440833ded45aa9f3814cc287bff4ca666777b1c5" exitCode=0 Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.634630 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerDied","Data":"ca9422bfcff2dff1724b6a93440833ded45aa9f3814cc287bff4ca666777b1c5"} Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.634659 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9b8d" event={"ID":"b7670450-d08a-434a-a9c3-455434e19343","Type":"ContainerDied","Data":"23c1d68387a44e05e8dbb8f0f962660f6fe2e3713c81faecaaf7a9cb495b64db"} Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.634671 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c1d68387a44e05e8dbb8f0f962660f6fe2e3713c81faecaaf7a9cb495b64db" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.690517 4736 scope.go:117] "RemoveContainer" containerID="b19f745e7621dc0afc50381b80edc26830f3e6c65e4a8e103d89b5ac5336e755" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.696613 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.722424 4736 scope.go:117] "RemoveContainer" containerID="501dc9d256dd94e2b7424efff25421d73d4dd41d42dcdab4adda73b4b8210496" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.826477 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities\") pod \"b7670450-d08a-434a-a9c3-455434e19343\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.827348 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms6kd\" (UniqueName: \"kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd\") pod \"b7670450-d08a-434a-a9c3-455434e19343\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.827538 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content\") pod \"b7670450-d08a-434a-a9c3-455434e19343\" (UID: \"b7670450-d08a-434a-a9c3-455434e19343\") " Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.830755 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities" (OuterVolumeSpecName: "utilities") pod "b7670450-d08a-434a-a9c3-455434e19343" (UID: "b7670450-d08a-434a-a9c3-455434e19343"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.831358 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.841344 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd" (OuterVolumeSpecName: "kube-api-access-ms6kd") pod "b7670450-d08a-434a-a9c3-455434e19343" (UID: "b7670450-d08a-434a-a9c3-455434e19343"). InnerVolumeSpecName "kube-api-access-ms6kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.881936 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7670450-d08a-434a-a9c3-455434e19343" (UID: "b7670450-d08a-434a-a9c3-455434e19343"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.933054 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7670450-d08a-434a-a9c3-455434e19343-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:46 crc kubenswrapper[4736]: I0316 15:40:46.933088 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms6kd\" (UniqueName: \"kubernetes.io/projected/b7670450-d08a-434a-a9c3-455434e19343-kube-api-access-ms6kd\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:47 crc kubenswrapper[4736]: I0316 15:40:47.642911 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9b8d" Mar 16 15:40:47 crc kubenswrapper[4736]: I0316 15:40:47.670022 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:47 crc kubenswrapper[4736]: I0316 15:40:47.701947 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r9b8d"] Mar 16 15:40:48 crc kubenswrapper[4736]: I0316 15:40:48.999180 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7670450-d08a-434a-a9c3-455434e19343" path="/var/lib/kubelet/pods/b7670450-d08a-434a-a9c3-455434e19343/volumes" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.366211 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.449558 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.449830 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="dnsmasq-dns" containerID="cri-o://f909aca514d002f1d15c88c7d2bacc714e3b2f65c8f7ec5266efda9e6a45109d" gracePeriod=10 Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.632178 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-577bbc9c65-pclks"] Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643205 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="extract-content" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.643288 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="extract-content" Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643409 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.643480 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643550 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="extract-utilities" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.643608 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="extract-utilities" Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643662 4736 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="extract-content" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.643731 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="extract-content" Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643810 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.643878 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: E0316 15:40:49.643946 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="extract-utilities" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.644009 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="extract-utilities" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.644256 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7670450-d08a-434a-a9c3-455434e19343" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.644336 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e129126d-fe87-47c1-8098-56d95255d546" containerName="registry-server" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.645389 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.676680 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577bbc9c65-pclks"] Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.677315 4736 generic.go:334] "Generic (PLEG): container finished" podID="c8ba66d5-5da5-472a-844d-a17b77655425" containerID="f909aca514d002f1d15c88c7d2bacc714e3b2f65c8f7ec5266efda9e6a45109d" exitCode=0 Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.677353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" event={"ID":"c8ba66d5-5da5-472a-844d-a17b77655425","Type":"ContainerDied","Data":"f909aca514d002f1d15c88c7d2bacc714e3b2f65c8f7ec5266efda9e6a45109d"} Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.796518 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-openstack-edpm-ipam\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.796586 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-nb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.796731 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-swift-storage-0\") pod 
\"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.796812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-sb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.797117 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ps2h\" (UniqueName: \"kubernetes.io/projected/8de44873-db27-4ee1-bade-ac87cec3c328-kube-api-access-7ps2h\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.797242 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-svc\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.797281 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-config\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.898848 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-nb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.898928 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-swift-storage-0\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.898971 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-sb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.899075 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ps2h\" (UniqueName: \"kubernetes.io/projected/8de44873-db27-4ee1-bade-ac87cec3c328-kube-api-access-7ps2h\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.899186 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-svc\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.899217 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-config\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.899281 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-openstack-edpm-ipam\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.900538 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-swift-storage-0\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.900569 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-openstack-edpm-ipam\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.901017 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-config\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.901548 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-dns-svc\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.902080 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-nb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.912041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8de44873-db27-4ee1-bade-ac87cec3c328-ovsdbserver-sb\") pod \"dnsmasq-dns-577bbc9c65-pclks\" (UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:49 crc kubenswrapper[4736]: I0316 15:40:49.935908 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ps2h\" (UniqueName: \"kubernetes.io/projected/8de44873-db27-4ee1-bade-ac87cec3c328-kube-api-access-7ps2h\") pod \"dnsmasq-dns-577bbc9c65-pclks\" 
(UID: \"8de44873-db27-4ee1-bade-ac87cec3c328\") " pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.020627 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.200891 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.306726 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.307061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.307140 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.307189 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.307206 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q79f\" (UniqueName: \"kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.307406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.318941 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f" (OuterVolumeSpecName: "kube-api-access-2q79f") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "kube-api-access-2q79f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.398613 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.408807 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.409991 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.411089 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.411201 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") pod \"c8ba66d5-5da5-472a-844d-a17b77655425\" (UID: \"c8ba66d5-5da5-472a-844d-a17b77655425\") " Mar 16 15:40:50 crc kubenswrapper[4736]: W0316 15:40:50.411249 4736 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c8ba66d5-5da5-472a-844d-a17b77655425/volumes/kubernetes.io~configmap/dns-swift-storage-0 Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.411275 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: W0316 15:40:50.411350 4736 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c8ba66d5-5da5-472a-844d-a17b77655425/volumes/kubernetes.io~configmap/dns-svc Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.411365 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.411984 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.412002 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.412013 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.412021 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q79f\" (UniqueName: \"kubernetes.io/projected/c8ba66d5-5da5-472a-844d-a17b77655425-kube-api-access-2q79f\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.417779 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.428774 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config" (OuterVolumeSpecName: "config") pod "c8ba66d5-5da5-472a-844d-a17b77655425" (UID: "c8ba66d5-5da5-472a-844d-a17b77655425"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.514336 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.515004 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8ba66d5-5da5-472a-844d-a17b77655425-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.559548 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-577bbc9c65-pclks"] Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.714627 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" event={"ID":"8de44873-db27-4ee1-bade-ac87cec3c328","Type":"ContainerStarted","Data":"8c49f031c03e045dd396891c90adc27870513643c35cd286fbf859cbce98099d"} Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.718641 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" event={"ID":"c8ba66d5-5da5-472a-844d-a17b77655425","Type":"ContainerDied","Data":"0278f71441a291bde7cfe390b80313ee2773c220b8c51abb3240eaa8e0813b8c"} Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.718693 4736 scope.go:117] "RemoveContainer" containerID="f909aca514d002f1d15c88c7d2bacc714e3b2f65c8f7ec5266efda9e6a45109d" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.718734 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.788310 4736 scope.go:117] "RemoveContainer" containerID="7c1f622b962971f095b8d2c8db09947a8910c489c88ff669165752f5a654dc54" Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.805072 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.819498 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff6ff9699-tkccb"] Mar 16 15:40:50 crc kubenswrapper[4736]: I0316 15:40:50.999678 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" path="/var/lib/kubelet/pods/c8ba66d5-5da5-472a-844d-a17b77655425/volumes" Mar 16 15:40:51 crc kubenswrapper[4736]: I0316 15:40:51.729040 4736 generic.go:334] "Generic (PLEG): container finished" podID="8de44873-db27-4ee1-bade-ac87cec3c328" containerID="5e2cdddbadd1a2180726e4e516c71ec3751224bf710237f74b4049a1ac44b18c" exitCode=0 Mar 16 15:40:51 crc kubenswrapper[4736]: I0316 15:40:51.729136 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" event={"ID":"8de44873-db27-4ee1-bade-ac87cec3c328","Type":"ContainerDied","Data":"5e2cdddbadd1a2180726e4e516c71ec3751224bf710237f74b4049a1ac44b18c"} Mar 16 15:40:52 crc kubenswrapper[4736]: I0316 15:40:52.744896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" event={"ID":"8de44873-db27-4ee1-bade-ac87cec3c328","Type":"ContainerStarted","Data":"146711f297d4c8c414da4132693c6f764b30d08c30138fa0871723d5a91805c8"} Mar 16 15:40:52 crc kubenswrapper[4736]: I0316 15:40:52.745346 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" 
Mar 16 15:40:52 crc kubenswrapper[4736]: I0316 15:40:52.769529 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-577bbc9c65-pclks" podStartSLOduration=3.769508297 podStartE2EDuration="3.769508297s" podCreationTimestamp="2026-03-16 15:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:40:52.763592053 +0000 UTC m=+1654.490982350" watchObservedRunningTime="2026-03-16 15:40:52.769508297 +0000 UTC m=+1654.496898584" Mar 16 15:40:55 crc kubenswrapper[4736]: I0316 15:40:55.037975 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ff6ff9699-tkccb" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.233:5353: i/o timeout" Mar 16 15:40:57 crc kubenswrapper[4736]: I0316 15:40:57.068155 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tq8tw"] Mar 16 15:40:57 crc kubenswrapper[4736]: I0316 15:40:57.081688 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tq8tw"] Mar 16 15:40:58 crc kubenswrapper[4736]: I0316 15:40:58.039274 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-430b-account-create-update-p6zhb"] Mar 16 15:40:58 crc kubenswrapper[4736]: I0316 15:40:58.052969 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-430b-account-create-update-p6zhb"] Mar 16 15:40:58 crc kubenswrapper[4736]: I0316 15:40:58.999157 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6809e9ed-0919-419c-87f4-86d756616c27" path="/var/lib/kubelet/pods/6809e9ed-0919-419c-87f4-86d756616c27/volumes" Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.001667 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81dee002-8da2-4a98-a38a-4d3b55609e79" path="/var/lib/kubelet/pods/81dee002-8da2-4a98-a38a-4d3b55609e79/volumes" Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.058514 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v2mn2"] Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.079774 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-18c6-account-create-update-887bw"] Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.099904 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v2mn2"] Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.111767 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-18c6-account-create-update-887bw"] Mar 16 15:40:59 crc kubenswrapper[4736]: I0316 15:40:59.979022 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:40:59 crc kubenswrapper[4736]: E0316 15:40:59.979465 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.022452 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-577bbc9c65-pclks" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.103390 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.103853 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="dnsmasq-dns" containerID="cri-o://00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def" gracePeriod=10 Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.599122 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.728436 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.728874 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.728925 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.728983 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.729304 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.729368 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8ft\" (UniqueName: \"kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.729630 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam\") pod \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\" (UID: \"f3dd9367-a1d0-42c9-bf6c-3393089a493b\") " Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.750520 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft" (OuterVolumeSpecName: "kube-api-access-6c8ft") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" 
(UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "kube-api-access-6c8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.797412 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.806705 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.813573 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.823947 4736 generic.go:334] "Generic (PLEG): container finished" podID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerID="00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def" exitCode=0 Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.823992 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" event={"ID":"f3dd9367-a1d0-42c9-bf6c-3393089a493b","Type":"ContainerDied","Data":"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def"} Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.824020 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" event={"ID":"f3dd9367-a1d0-42c9-bf6c-3393089a493b","Type":"ContainerDied","Data":"46c84808937f1c60eee1edf1514f1e5b179c8a7a9b9ce3f77ff59c1e5341e683"} Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.824036 4736 scope.go:117] "RemoveContainer" containerID="00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.824347 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.824504 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84944565bc-qv7d5" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.827659 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config" (OuterVolumeSpecName: "config") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835043 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835608 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835702 4736 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835777 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-config\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835847 4736 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.835917 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8ft\" (UniqueName: \"kubernetes.io/projected/f3dd9367-a1d0-42c9-bf6c-3393089a493b-kube-api-access-6c8ft\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.842425 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3dd9367-a1d0-42c9-bf6c-3393089a493b" (UID: "f3dd9367-a1d0-42c9-bf6c-3393089a493b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.853708 4736 scope.go:117] "RemoveContainer" containerID="0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.874358 4736 scope.go:117] "RemoveContainer" containerID="00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def" Mar 16 15:41:00 crc kubenswrapper[4736]: E0316 15:41:00.874801 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def\": container with ID starting with 00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def not found: ID does not exist" containerID="00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.874934 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def"} err="failed to get container status \"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def\": rpc error: code = NotFound desc = could not find container \"00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def\": container with ID starting with 00e6885876c30fcbc6f5ab80b06f2c88fe8eeb57e7add9a13d2ffbfc9af60def not found: ID does not exist" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.875021 4736 scope.go:117] "RemoveContainer" containerID="0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da" Mar 16 15:41:00 crc kubenswrapper[4736]: E0316 15:41:00.875443 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da\": container with ID starting with 0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da not found: ID does not exist" containerID="0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.875532 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da"} err="failed to get container status \"0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da\": rpc error: code = NotFound desc = could not find container \"0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da\": container with ID starting with 0078863df0ba9a729a8bab7d97ee5f4f9450799f1706dae7ef7d5db93a0c81da not found: ID does not exist" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.938000 4736 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3dd9367-a1d0-42c9-bf6c-3393089a493b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.988081 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78aac930-9cb0-48f5-80a3-aa0b50917c88" path="/var/lib/kubelet/pods/78aac930-9cb0-48f5-80a3-aa0b50917c88/volumes" Mar 16 15:41:00 crc kubenswrapper[4736]: I0316 15:41:00.988706 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e064290b-372f-478f-b907-557ecf3e5bc3" path="/var/lib/kubelet/pods/e064290b-372f-478f-b907-557ecf3e5bc3/volumes" Mar 16 15:41:01 crc kubenswrapper[4736]: I0316 15:41:01.164910 4736 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:41:01 crc kubenswrapper[4736]: I0316 15:41:01.183497 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84944565bc-qv7d5"] Mar 16 15:41:02 crc kubenswrapper[4736]: I0316 15:41:02.993284 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" path="/var/lib/kubelet/pods/f3dd9367-a1d0-42c9-bf6c-3393089a493b/volumes" Mar 16 15:41:03 crc kubenswrapper[4736]: I0316 15:41:03.041052 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1291-account-create-update-64kxj"] Mar 16 15:41:03 crc kubenswrapper[4736]: I0316 15:41:03.056272 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-2nl2c"] Mar 16 15:41:03 crc kubenswrapper[4736]: I0316 15:41:03.065856 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1291-account-create-update-64kxj"] Mar 16 15:41:03 crc kubenswrapper[4736]: I0316 15:41:03.073433 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-2nl2c"] Mar 16 15:41:04 crc kubenswrapper[4736]: I0316 15:41:04.041887 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-24qsf"] Mar 16 15:41:04 crc kubenswrapper[4736]: I0316 15:41:04.054256 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-24qsf"] Mar 16 15:41:04 crc kubenswrapper[4736]: I0316 15:41:04.995043 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b9e5ba-7a59-41be-9981-4dcf75383b70" path="/var/lib/kubelet/pods/21b9e5ba-7a59-41be-9981-4dcf75383b70/volumes" Mar 16 15:41:04 crc kubenswrapper[4736]: I0316 15:41:04.995872 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af39abb7-382d-4776-b257-0e47f0c50a64" path="/var/lib/kubelet/pods/af39abb7-382d-4776-b257-0e47f0c50a64/volumes" Mar 16 15:41:04 crc kubenswrapper[4736]: I0316 15:41:04.997476 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f1f86e-2324-4429-beb7-14d4d02563fe" path="/var/lib/kubelet/pods/e0f1f86e-2324-4429-beb7-14d4d02563fe/volumes" Mar 16 15:41:12 crc kubenswrapper[4736]: I0316 15:41:12.965926 4736 generic.go:334] "Generic (PLEG): container finished" podID="0b8db200-f455-4868-8ffd-7f129434034e" containerID="f56b047cc25eac32facac80dbf6169d9a55e7e0a0d81e587500c6c25b99f6e55" exitCode=0 Mar 16 15:41:12 crc kubenswrapper[4736]: I0316 15:41:12.966148 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b8db200-f455-4868-8ffd-7f129434034e","Type":"ContainerDied","Data":"f56b047cc25eac32facac80dbf6169d9a55e7e0a0d81e587500c6c25b99f6e55"} Mar 16 15:41:12 crc kubenswrapper[4736]: I0316 15:41:12.973043 4736 generic.go:334] "Generic (PLEG): container finished" podID="da4145d1-110a-477c-ba28-813d6c53db11" containerID="1eb23b3f87620e74c88aaa7cf5ca8a92622bfe39c6ee3fdc607f6d78a2581467" exitCode=0 Mar 16 15:41:12 crc kubenswrapper[4736]: I0316 15:41:12.973085 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da4145d1-110a-477c-ba28-813d6c53db11","Type":"ContainerDied","Data":"1eb23b3f87620e74c88aaa7cf5ca8a92622bfe39c6ee3fdc607f6d78a2581467"} Mar 16 15:41:13 crc kubenswrapper[4736]: I0316 15:41:13.009907 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 
15:41:13 crc kubenswrapper[4736]: E0316 15:41:13.010455 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:41:13 crc kubenswrapper[4736]: I0316 15:41:13.990445 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da4145d1-110a-477c-ba28-813d6c53db11","Type":"ContainerStarted","Data":"8aceb3949c2c1f929428107916e2aab95fbdac250da2c1f1661ee7381e31c9f6"} Mar 16 15:41:13 crc kubenswrapper[4736]: I0316 15:41:13.991328 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:41:13 crc kubenswrapper[4736]: I0316 15:41:13.993958 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0b8db200-f455-4868-8ffd-7f129434034e","Type":"ContainerStarted","Data":"a9606fef4fbbd8414c596c50f57e9d52312d46914d1361d99c7c9558cc4b972e"} Mar 16 15:41:13 crc kubenswrapper[4736]: I0316 15:41:13.995345 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.023495 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n"] Mar 16 15:41:14 crc kubenswrapper[4736]: E0316 15:41:14.023982 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="dnsmasq-dns" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.024323 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="dnsmasq-dns" Mar 16 15:41:14 crc kubenswrapper[4736]: E0316 15:41:14.024382 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="init" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.024394 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="init" Mar 16 15:41:14 crc kubenswrapper[4736]: E0316 15:41:14.024415 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="init" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.024425 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="init" Mar 16 15:41:14 crc kubenswrapper[4736]: E0316 15:41:14.024448 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="dnsmasq-dns" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.024457 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="dnsmasq-dns" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.025230 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ba66d5-5da5-472a-844d-a17b77655425" containerName="dnsmasq-dns" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.025269 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3dd9367-a1d0-42c9-bf6c-3393089a493b" containerName="dnsmasq-dns" Mar 
16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.026419 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.037216 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.037413 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.037245 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.039420 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.043856 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n"] Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.046888 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.046872541 podStartE2EDuration="37.046872541s" podCreationTimestamp="2026-03-16 15:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:41:14.024750448 +0000 UTC m=+1675.752140745" watchObservedRunningTime="2026-03-16 15:41:14.046872541 +0000 UTC m=+1675.774262828" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.086480 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.086465159 podStartE2EDuration="37.086465159s" podCreationTimestamp="2026-03-16 15:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 15:41:14.085618156 +0000 UTC m=+1675.813008443" watchObservedRunningTime="2026-03-16 15:41:14.086465159 +0000 UTC m=+1675.813855446" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.176942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9xz\" (UniqueName: \"kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.177061 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.177151 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: 
\"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.177191 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.279099 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9xz\" (UniqueName: \"kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.279191 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.279233 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.279257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.291243 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.291402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.294852 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.296782 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9xz\" (UniqueName: \"kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:14 crc kubenswrapper[4736]: I0316 15:41:14.349717 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:15 crc kubenswrapper[4736]: I0316 15:41:15.083376 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n"] Mar 16 15:41:16 crc kubenswrapper[4736]: I0316 15:41:16.033991 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" event={"ID":"37a5c3b4-a904-4d80-8823-97fa52f36de3","Type":"ContainerStarted","Data":"2d3d75629e197073abe58bda7f04d5f98be36460e36c61bc600e72ba865eb126"} Mar 16 15:41:24 crc kubenswrapper[4736]: I0316 15:41:24.787402 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:41:25 crc kubenswrapper[4736]: I0316 15:41:25.135239 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" event={"ID":"37a5c3b4-a904-4d80-8823-97fa52f36de3","Type":"ContainerStarted","Data":"ad9247338549be90a7a145c38f516782701fca8f2ede199136edf805e62a5b84"} Mar 16 15:41:25 crc kubenswrapper[4736]: I0316 15:41:25.158765 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" podStartSLOduration=2.466833893 podStartE2EDuration="12.158744186s" podCreationTimestamp="2026-03-16 15:41:13 +0000 UTC" firstStartedPulling="2026-03-16 15:41:15.092585729 +0000 UTC m=+1676.819976016" lastFinishedPulling="2026-03-16 15:41:24.784496022 +0000 UTC m=+1686.511886309" observedRunningTime="2026-03-16 15:41:25.15705827 +0000 UTC m=+1686.884448577" watchObservedRunningTime="2026-03-16 15:41:25.158744186 +0000 UTC m=+1686.886134473" Mar 16 15:41:27 crc kubenswrapper[4736]: I0316 15:41:27.979067 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:41:27 crc kubenswrapper[4736]: E0316 15:41:27.980018 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:41:28 crc kubenswrapper[4736]: I0316 15:41:28.049204 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 16 15:41:28 crc kubenswrapper[4736]: I0316 15:41:28.473380 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 16 15:41:38 
crc kubenswrapper[4736]: I0316 15:41:38.285727 4736 generic.go:334] "Generic (PLEG): container finished" podID="37a5c3b4-a904-4d80-8823-97fa52f36de3" containerID="ad9247338549be90a7a145c38f516782701fca8f2ede199136edf805e62a5b84" exitCode=0 Mar 16 15:41:38 crc kubenswrapper[4736]: I0316 15:41:38.285850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" event={"ID":"37a5c3b4-a904-4d80-8823-97fa52f36de3","Type":"ContainerDied","Data":"ad9247338549be90a7a145c38f516782701fca8f2ede199136edf805e62a5b84"} Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.064132 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-a023-account-create-update-ncblm"] Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.074339 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-x6q66"] Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.084963 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-a023-account-create-update-ncblm"] Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.093966 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-x6q66"] Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.764929 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.878629 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm9xz\" (UniqueName: \"kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz\") pod \"37a5c3b4-a904-4d80-8823-97fa52f36de3\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.878707 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle\") pod \"37a5c3b4-a904-4d80-8823-97fa52f36de3\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.878764 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam\") pod \"37a5c3b4-a904-4d80-8823-97fa52f36de3\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.878931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory\") pod \"37a5c3b4-a904-4d80-8823-97fa52f36de3\" (UID: \"37a5c3b4-a904-4d80-8823-97fa52f36de3\") " Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.889960 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "37a5c3b4-a904-4d80-8823-97fa52f36de3" (UID: "37a5c3b4-a904-4d80-8823-97fa52f36de3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.890395 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz" (OuterVolumeSpecName: "kube-api-access-rm9xz") pod "37a5c3b4-a904-4d80-8823-97fa52f36de3" (UID: "37a5c3b4-a904-4d80-8823-97fa52f36de3"). InnerVolumeSpecName "kube-api-access-rm9xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.907551 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory" (OuterVolumeSpecName: "inventory") pod "37a5c3b4-a904-4d80-8823-97fa52f36de3" (UID: "37a5c3b4-a904-4d80-8823-97fa52f36de3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.915461 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37a5c3b4-a904-4d80-8823-97fa52f36de3" (UID: "37a5c3b4-a904-4d80-8823-97fa52f36de3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.981582 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm9xz\" (UniqueName: \"kubernetes.io/projected/37a5c3b4-a904-4d80-8823-97fa52f36de3-kube-api-access-rm9xz\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.981607 4736 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.981618 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:39 crc kubenswrapper[4736]: I0316 15:41:39.981629 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37a5c3b4-a904-4d80-8823-97fa52f36de3-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.027038 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ed8c-account-create-update-pxbvq"] Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.038334 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-f7de-account-create-update-wvjgd"] Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.053348 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ed8c-account-create-update-pxbvq"] Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.064225 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-f7de-account-create-update-wvjgd"] Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.318746 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" 
event={"ID":"37a5c3b4-a904-4d80-8823-97fa52f36de3","Type":"ContainerDied","Data":"2d3d75629e197073abe58bda7f04d5f98be36460e36c61bc600e72ba865eb126"} Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.318787 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d3d75629e197073abe58bda7f04d5f98be36460e36c61bc600e72ba865eb126" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.318835 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.440964 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8"] Mar 16 15:41:40 crc kubenswrapper[4736]: E0316 15:41:40.441425 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a5c3b4-a904-4d80-8823-97fa52f36de3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.441445 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a5c3b4-a904-4d80-8823-97fa52f36de3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.441622 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a5c3b4-a904-4d80-8823-97fa52f36de3" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.443303 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.446470 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.446975 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.447039 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.449093 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.471200 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8"] Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.593941 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.594009 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.594070 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkrcg\" (UniqueName: \"kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.696242 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.696295 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.696359 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkrcg\" (UniqueName: \"kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.700790 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.701220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.718474 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkrcg\" (UniqueName: \"kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lzcb8\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:40 crc kubenswrapper[4736]: I0316 15:41:40.764485 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.002969 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2240a5bb-051a-419c-bc1d-f5dc902a10e6" path="/var/lib/kubelet/pods/2240a5bb-051a-419c-bc1d-f5dc902a10e6/volumes" Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.004167 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2898e153-f18f-4134-bc3c-59928983c1b9" path="/var/lib/kubelet/pods/2898e153-f18f-4134-bc3c-59928983c1b9/volumes" Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.004754 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe221db-0528-4bf5-a178-fc56b36d79f0" path="/var/lib/kubelet/pods/7fe221db-0528-4bf5-a178-fc56b36d79f0/volumes" Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.005492 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53ccca4-e4ca-4621-b748-47fd9cea24f7" path="/var/lib/kubelet/pods/e53ccca4-e4ca-4621-b748-47fd9cea24f7/volumes" Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.273793 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8"] Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.339290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" event={"ID":"a44a03ad-9259-452c-8234-2ee8f93d66be","Type":"ContainerStarted","Data":"00a3192c5e11c204eee2f57ee400f628c6b81c4111c66cd6c43268090e640760"} Mar 16 15:41:41 crc kubenswrapper[4736]: I0316 15:41:41.978395 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:41:41 crc kubenswrapper[4736]: E0316 15:41:41.978994 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:41:42 crc kubenswrapper[4736]: I0316 15:41:42.366558 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" event={"ID":"a44a03ad-9259-452c-8234-2ee8f93d66be","Type":"ContainerStarted","Data":"b5a04dd3af4b2a85799b6c1add742b3e4ebb41a10a9362a270425dcba41ddf29"} Mar 16 15:41:42 crc kubenswrapper[4736]: I0316 15:41:42.395718 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" podStartSLOduration=1.9480037860000001 podStartE2EDuration="2.395696226s" podCreationTimestamp="2026-03-16 15:41:40 +0000 UTC" firstStartedPulling="2026-03-16 15:41:41.283285159 +0000 UTC m=+1703.010675446" lastFinishedPulling="2026-03-16 15:41:41.730977589 +0000 UTC m=+1703.458367886" observedRunningTime="2026-03-16 15:41:42.390620636 +0000 UTC m=+1704.118010963" watchObservedRunningTime="2026-03-16 15:41:42.395696226 +0000 UTC m=+1704.123086513" Mar 16 15:41:43 crc kubenswrapper[4736]: I0316 15:41:43.035655 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qwxkl"] Mar 16 15:41:43 crc kubenswrapper[4736]: I0316 15:41:43.048590 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-create-xbrjm"] Mar 16 15:41:43 crc kubenswrapper[4736]: I0316 15:41:43.060095 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qwxkl"] Mar 16 15:41:43 crc kubenswrapper[4736]: I0316 15:41:43.069797 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-xbrjm"] Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.053156 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tc7zt"] Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.070038 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-407f-account-create-update-cnjtc"] Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.081140 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-407f-account-create-update-cnjtc"] Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.089739 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tc7zt"] Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.996461 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e29bc35-4ebf-41e4-a7b0-ef90df644ca4" path="/var/lib/kubelet/pods/0e29bc35-4ebf-41e4-a7b0-ef90df644ca4/volumes" Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.997577 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1dd355-d321-4c74-be86-7b850f60a065" path="/var/lib/kubelet/pods/ab1dd355-d321-4c74-be86-7b850f60a065/volumes" Mar 16 15:41:44 crc kubenswrapper[4736]: I0316 15:41:44.998257 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becf0d51-28cb-47fc-9c0d-9d3042fbec0a" path="/var/lib/kubelet/pods/becf0d51-28cb-47fc-9c0d-9d3042fbec0a/volumes" Mar 16 15:41:45 crc kubenswrapper[4736]: I0316 15:41:45.000419 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ebb150-6fe4-4bfc-92ec-948f02fc5d17" path="/var/lib/kubelet/pods/d8ebb150-6fe4-4bfc-92ec-948f02fc5d17/volumes" Mar 16 15:41:45 crc kubenswrapper[4736]: I0316 15:41:45.411042 4736 generic.go:334] "Generic (PLEG): container finished" podID="a44a03ad-9259-452c-8234-2ee8f93d66be" containerID="b5a04dd3af4b2a85799b6c1add742b3e4ebb41a10a9362a270425dcba41ddf29" exitCode=0 Mar 16 15:41:45 crc kubenswrapper[4736]: I0316 15:41:45.411090 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" event={"ID":"a44a03ad-9259-452c-8234-2ee8f93d66be","Type":"ContainerDied","Data":"b5a04dd3af4b2a85799b6c1add742b3e4ebb41a10a9362a270425dcba41ddf29"} Mar 16 15:41:46 crc kubenswrapper[4736]: I0316 15:41:46.877164 4736 scope.go:117] "RemoveContainer" containerID="acba7046e116755f14b29b90e864c1e6f3901cf2732d5e38c6002c4d08eb1be9" Mar 16 15:41:46 crc kubenswrapper[4736]: I0316 15:41:46.942496 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:46 crc kubenswrapper[4736]: I0316 15:41:46.958443 4736 scope.go:117] "RemoveContainer" containerID="638eb46df411b49e1b089ab7bce4669146bbbe801a15dafbd1810575e7003ebd" Mar 16 15:41:46 crc kubenswrapper[4736]: I0316 15:41:46.993698 4736 scope.go:117] "RemoveContainer" containerID="2709faefc4553add5911dd82703f8bb247ee54ada241fc8eea8be408fda322f0" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.022982 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkrcg\" (UniqueName: \"kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg\") pod \"a44a03ad-9259-452c-8234-2ee8f93d66be\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.023070 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory\") pod \"a44a03ad-9259-452c-8234-2ee8f93d66be\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.023215 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam\") pod \"a44a03ad-9259-452c-8234-2ee8f93d66be\" (UID: \"a44a03ad-9259-452c-8234-2ee8f93d66be\") " Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.030347 4736 scope.go:117] "RemoveContainer" containerID="53a15ec634f0d46d62306a1c5f51ee998812b65f3c3d21792dc43cd14de83dae" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.045768 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg" (OuterVolumeSpecName: "kube-api-access-vkrcg") pod "a44a03ad-9259-452c-8234-2ee8f93d66be" (UID: "a44a03ad-9259-452c-8234-2ee8f93d66be"). InnerVolumeSpecName "kube-api-access-vkrcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.056672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a44a03ad-9259-452c-8234-2ee8f93d66be" (UID: "a44a03ad-9259-452c-8234-2ee8f93d66be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.059080 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory" (OuterVolumeSpecName: "inventory") pod "a44a03ad-9259-452c-8234-2ee8f93d66be" (UID: "a44a03ad-9259-452c-8234-2ee8f93d66be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.123413 4736 scope.go:117] "RemoveContainer" containerID="bd6412bc313477a64c694b08a53e71b2d3cf2d30aafa60f32ab1431a61d42035" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.125708 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkrcg\" (UniqueName: \"kubernetes.io/projected/a44a03ad-9259-452c-8234-2ee8f93d66be-kube-api-access-vkrcg\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.125735 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.125746 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a44a03ad-9259-452c-8234-2ee8f93d66be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.152300 4736 scope.go:117] "RemoveContainer" containerID="caa48ce8fc3c1d250d20913ef1b6daeb412c47aa82baf1193fd75c6d0178037a" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.187283 4736 scope.go:117] "RemoveContainer" containerID="697af517e42b276811837143ee066426967533af73af2e7a8ca850d0a1a3354f" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.209010 4736 scope.go:117] "RemoveContainer" containerID="f02d9b45ae6d3129da7816c391fc38fd1657272770481d72938c3f133d08ff93" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.230089 4736 scope.go:117] "RemoveContainer" containerID="cc6efb9dde20e90bbfd5b01123087696e9d336f1cb445f1922ed92edb75751cb" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.250501 4736 scope.go:117] "RemoveContainer" containerID="64de37c7cd958cf4e4fd7dd8eb36fce0b47e291836bf38a9ce90cbd6a083d0bb" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.272853 4736 scope.go:117] "RemoveContainer" containerID="cfdd9a9596b270afe3bba4f10528b20e0354bbdfcfae1209e265f7c222c3584b" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.295463 4736 scope.go:117] "RemoveContainer" containerID="e9d300993f33bd28843615e27106f966a001f3876dd6b290452761d9a138627b" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.321939 4736 scope.go:117] "RemoveContainer" containerID="68d0741c28017254de3a55137dee72d23723cb087378a23bbeeddc744181d7e2" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.351526 4736 scope.go:117] "RemoveContainer" containerID="82cc9c688b89915b5512d2f403cf0c16a3dac46edaf03953fd33cf9dbdec5df5" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.374025 4736 scope.go:117] "RemoveContainer" containerID="9c560c8506d22d71c570cfe52b90c8a77fcb15d8633e3201d1e4a6dac25a5ab2" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.391906 4736 scope.go:117] "RemoveContainer" containerID="78ee47d11a732d90e0a223a49ea99a4d40543193cb137539392405ea124eaf7a" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.418374 4736 scope.go:117] "RemoveContainer" containerID="be5a097bcf03f53f9eb56666a1eabe7d37d49a9d982069b1fc0665220e771a7d" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.453877 4736 scope.go:117] "RemoveContainer" containerID="3e1f8046fe55418048c301dc329fbaa3d4ad78f5d23dae64130ca6b632dd40c0" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.460346 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" event={"ID":"a44a03ad-9259-452c-8234-2ee8f93d66be","Type":"ContainerDied","Data":"00a3192c5e11c204eee2f57ee400f628c6b81c4111c66cd6c43268090e640760"} Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.460379 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a3192c5e11c204eee2f57ee400f628c6b81c4111c66cd6c43268090e640760" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.460552 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lzcb8" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.512488 4736 scope.go:117] "RemoveContainer" containerID="718bcfdd2a4b2526e48e296eba2be2a0de00ad286c262892a19015dc2f16ad38" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.563244 4736 scope.go:117] "RemoveContainer" containerID="3722f183efc0219ffd1185b30f0daacef5b4c17d3b6b76626a5ec88c78e3323f" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.572669 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm"] Mar 16 15:41:47 crc kubenswrapper[4736]: E0316 15:41:47.573062 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a44a03ad-9259-452c-8234-2ee8f93d66be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.573079 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a44a03ad-9259-452c-8234-2ee8f93d66be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.573290 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a44a03ad-9259-452c-8234-2ee8f93d66be" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.573926 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.580667 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.580845 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.580990 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.581082 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.591022 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm"] Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.613678 4736 scope.go:117] "RemoveContainer" containerID="eee3f0277c6514c70468deded615fd0d24df773bc0c724f2ae9e436b3f43bcb2" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.735557 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.735668 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.735709 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qflxs\" (UniqueName: \"kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.735741 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.837686 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 
15:41:47.837845 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qflxs\" (UniqueName: \"kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.837917 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.838028 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.842361 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.844547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.855397 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.864000 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qflxs\" (UniqueName: \"kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:47 crc kubenswrapper[4736]: I0316 15:41:47.906079 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:41:48 crc kubenswrapper[4736]: I0316 15:41:48.472556 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm"] Mar 16 15:41:48 crc kubenswrapper[4736]: I0316 15:41:48.522588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" event={"ID":"bac31a5a-12b7-4a43-b596-91352137545b","Type":"ContainerStarted","Data":"3a5b1aa8f643940e4eb43557e58c1dddebf94fe757e948f709ea7dda7b1c4aa4"} Mar 16 15:41:49 crc kubenswrapper[4736]: I0316 15:41:49.082323 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8spsp"] Mar 16 15:41:49 crc kubenswrapper[4736]: I0316 15:41:49.094758 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8spsp"] Mar 16 15:41:49 crc kubenswrapper[4736]: I0316 15:41:49.534252 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" event={"ID":"bac31a5a-12b7-4a43-b596-91352137545b","Type":"ContainerStarted","Data":"7322efc2e2168c71fff501ba61c003bad7f19d8068e25f0649f6b3963fb59ec0"} Mar 16 15:41:49 crc kubenswrapper[4736]: I0316 15:41:49.563170 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" podStartSLOduration=2.096850543 podStartE2EDuration="2.563148739s" podCreationTimestamp="2026-03-16 15:41:47 +0000 UTC" firstStartedPulling="2026-03-16 15:41:48.490194485 +0000 UTC m=+1710.217584812" lastFinishedPulling="2026-03-16 15:41:48.956492681 +0000 UTC m=+1710.683883008" observedRunningTime="2026-03-16 15:41:49.554981862 +0000 UTC m=+1711.282372149" watchObservedRunningTime="2026-03-16 15:41:49.563148739 +0000 UTC m=+1711.290539036" Mar 16 15:41:50 crc kubenswrapper[4736]: I0316 15:41:50.990647 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d527aa-c409-49d5-8901-5bd60482dfe4" path="/var/lib/kubelet/pods/36d527aa-c409-49d5-8901-5bd60482dfe4/volumes" Mar 16 15:41:52 crc kubenswrapper[4736]: I0316 15:41:52.978553 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:41:52 crc kubenswrapper[4736]: E0316 15:41:52.979053 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.140556 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561262-fpvfp"] Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.142581 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.144838 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.146164 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.146368 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.166260 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561262-fpvfp"] Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.293640 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jvcl\" (UniqueName: \"kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl\") pod \"auto-csr-approver-29561262-fpvfp\" (UID: \"ed4b5032-89e0-4dee-8a6c-74f43a3762e0\") " pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.399241 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jvcl\" (UniqueName: \"kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl\") pod \"auto-csr-approver-29561262-fpvfp\" (UID: \"ed4b5032-89e0-4dee-8a6c-74f43a3762e0\") " pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.421796 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jvcl\" (UniqueName: \"kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl\") pod \"auto-csr-approver-29561262-fpvfp\" (UID: \"ed4b5032-89e0-4dee-8a6c-74f43a3762e0\") " pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:00 crc kubenswrapper[4736]: I0316 15:42:00.463207 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:01 crc kubenswrapper[4736]: I0316 15:42:01.147586 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561262-fpvfp"] Mar 16 15:42:01 crc kubenswrapper[4736]: I0316 15:42:01.684370 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" event={"ID":"ed4b5032-89e0-4dee-8a6c-74f43a3762e0","Type":"ContainerStarted","Data":"07c8b223ddb1ef9e771b4ce231ddb0ecd7967232482f072d80149add45a460c1"} Mar 16 15:42:03 crc kubenswrapper[4736]: I0316 15:42:03.702040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" event={"ID":"ed4b5032-89e0-4dee-8a6c-74f43a3762e0","Type":"ContainerStarted","Data":"37a86d47217afa88b89b58f9ff3074144559c5b471f87326beb2d2f53f2e7a60"} Mar 16 15:42:03 crc kubenswrapper[4736]: I0316 15:42:03.715954 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" podStartSLOduration=2.212246428 podStartE2EDuration="3.715934312s" podCreationTimestamp="2026-03-16 15:42:00 +0000 UTC" firstStartedPulling="2026-03-16 15:42:01.163559847 +0000 UTC m=+1722.890950134" lastFinishedPulling="2026-03-16 15:42:02.667247711 +0000 UTC m=+1724.394638018" observedRunningTime="2026-03-16 15:42:03.714472542 +0000 UTC m=+1725.441862829" watchObservedRunningTime="2026-03-16 15:42:03.715934312 +0000 UTC m=+1725.443324599" Mar 16 15:42:04 crc kubenswrapper[4736]: I0316 15:42:04.726456 4736 generic.go:334] "Generic (PLEG): container finished" podID="ed4b5032-89e0-4dee-8a6c-74f43a3762e0" containerID="37a86d47217afa88b89b58f9ff3074144559c5b471f87326beb2d2f53f2e7a60" exitCode=0 Mar 16 15:42:04 crc kubenswrapper[4736]: I0316 15:42:04.726879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" event={"ID":"ed4b5032-89e0-4dee-8a6c-74f43a3762e0","Type":"ContainerDied","Data":"37a86d47217afa88b89b58f9ff3074144559c5b471f87326beb2d2f53f2e7a60"} Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.086861 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.216065 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jvcl\" (UniqueName: \"kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl\") pod \"ed4b5032-89e0-4dee-8a6c-74f43a3762e0\" (UID: \"ed4b5032-89e0-4dee-8a6c-74f43a3762e0\") " Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.224396 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl" (OuterVolumeSpecName: "kube-api-access-9jvcl") pod "ed4b5032-89e0-4dee-8a6c-74f43a3762e0" (UID: "ed4b5032-89e0-4dee-8a6c-74f43a3762e0"). InnerVolumeSpecName "kube-api-access-9jvcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.319096 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jvcl\" (UniqueName: \"kubernetes.io/projected/ed4b5032-89e0-4dee-8a6c-74f43a3762e0-kube-api-access-9jvcl\") on node \"crc\" DevicePath \"\"" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.753895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" event={"ID":"ed4b5032-89e0-4dee-8a6c-74f43a3762e0","Type":"ContainerDied","Data":"07c8b223ddb1ef9e771b4ce231ddb0ecd7967232482f072d80149add45a460c1"} Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.753947 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c8b223ddb1ef9e771b4ce231ddb0ecd7967232482f072d80149add45a460c1" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.754009 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561262-fpvfp" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.782396 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561256-bpsnb"] Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.789935 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561256-bpsnb"] Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.978824 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:42:06 crc kubenswrapper[4736]: E0316 15:42:06.979284 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:42:06 crc kubenswrapper[4736]: I0316 15:42:06.990356 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defbe484-8be6-4777-a843-c6e7dbd7e29e" path="/var/lib/kubelet/pods/defbe484-8be6-4777-a843-c6e7dbd7e29e/volumes" Mar 16 15:42:19 crc kubenswrapper[4736]: I0316 15:42:19.005353 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:42:19 crc kubenswrapper[4736]: E0316 15:42:19.011364 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:42:25 crc kubenswrapper[4736]: I0316 15:42:25.050896 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7g9td"] Mar 16 15:42:25 crc kubenswrapper[4736]: I0316 15:42:25.063021 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7g9td"] Mar 16 15:42:26 crc kubenswrapper[4736]: I0316 15:42:26.990621 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1002af8b-786a-47c1-8872-f417cc88561e" 
path="/var/lib/kubelet/pods/1002af8b-786a-47c1-8872-f417cc88561e/volumes" Mar 16 15:42:34 crc kubenswrapper[4736]: I0316 15:42:34.983859 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:42:34 crc kubenswrapper[4736]: E0316 15:42:34.984858 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:42:37 crc kubenswrapper[4736]: I0316 15:42:37.030749 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cgsss"] Mar 16 15:42:37 crc kubenswrapper[4736]: I0316 15:42:37.040271 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cgsss"] Mar 16 15:42:38 crc kubenswrapper[4736]: I0316 15:42:38.998428 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0be59e-89c6-45e8-9697-e513c48bb23e" path="/var/lib/kubelet/pods/8c0be59e-89c6-45e8-9697-e513c48bb23e/volumes" Mar 16 15:42:48 crc kubenswrapper[4736]: I0316 15:42:48.017041 4736 scope.go:117] "RemoveContainer" containerID="0a6de9ff336fc1bf45bb128e769d0cb09c466b42651d6bf09d913ae4d5b2bbf1" Mar 16 15:42:48 crc kubenswrapper[4736]: I0316 15:42:48.065955 4736 scope.go:117] "RemoveContainer" containerID="6ae727ef6d343c071767b767ab9085c460258b2868c0cb6387f7b7855bd6e788" Mar 16 15:42:48 crc kubenswrapper[4736]: I0316 15:42:48.136739 4736 scope.go:117] "RemoveContainer" containerID="d95bae2195104a52bfcd78054330122fc0dd60b8e564404c345d0771b5e4be63" Mar 16 15:42:48 crc kubenswrapper[4736]: I0316 15:42:48.200706 4736 scope.go:117] "RemoveContainer" containerID="a1926a8abc6a1ababb1388071717ceebc5679efa7f69a8fe41e96624e9c62a11" Mar 16 15:42:49 crc kubenswrapper[4736]: I0316 15:42:49.979351 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:42:49 crc kubenswrapper[4736]: E0316 15:42:49.980149 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.051983 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4xvcj"] Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.059891 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-l6p6k"] Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.069037 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4xvcj"] Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.080202 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-l6p6k"] Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.988173 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494a2ca6-a44a-4f96-8494-9708b72db762" 
path="/var/lib/kubelet/pods/494a2ca6-a44a-4f96-8494-9708b72db762/volumes" Mar 16 15:42:50 crc kubenswrapper[4736]: I0316 15:42:50.988856 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512818ab-2555-491e-8cba-c3192bb85fc2" path="/var/lib/kubelet/pods/512818ab-2555-491e-8cba-c3192bb85fc2/volumes" Mar 16 15:42:53 crc kubenswrapper[4736]: I0316 15:42:53.036433 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tqk4w"] Mar 16 15:42:53 crc kubenswrapper[4736]: I0316 15:42:53.049098 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tqk4w"] Mar 16 15:42:54 crc kubenswrapper[4736]: I0316 15:42:54.993082 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e65fd8a-c9fa-43b4-b1de-3657226bfac0" path="/var/lib/kubelet/pods/4e65fd8a-c9fa-43b4-b1de-3657226bfac0/volumes" Mar 16 15:43:04 crc kubenswrapper[4736]: I0316 15:43:04.979014 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:43:04 crc kubenswrapper[4736]: E0316 15:43:04.980184 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:43:08 crc kubenswrapper[4736]: I0316 15:43:08.032802 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-r5gq2"] Mar 16 15:43:08 crc kubenswrapper[4736]: I0316 15:43:08.042857 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-r5gq2"] Mar 16 15:43:08 crc kubenswrapper[4736]: I0316 15:43:08.993016 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea1215a-a5f6-406c-aba9-ba1f9da1a943" path="/var/lib/kubelet/pods/9ea1215a-a5f6-406c-aba9-ba1f9da1a943/volumes" Mar 16 15:43:11 crc kubenswrapper[4736]: I0316 15:43:11.057517 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-wvncr"] Mar 16 15:43:11 crc kubenswrapper[4736]: I0316 15:43:11.072242 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-wvncr"] Mar 16 15:43:12 crc kubenswrapper[4736]: I0316 15:43:12.992591 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2255bb68-be73-4c4f-8739-83783ae195f0" path="/var/lib/kubelet/pods/2255bb68-be73-4c4f-8739-83783ae195f0/volumes" Mar 16 15:43:18 crc kubenswrapper[4736]: I0316 15:43:18.978181 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:43:18 crc kubenswrapper[4736]: E0316 15:43:18.979440 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:43:30 crc kubenswrapper[4736]: I0316 15:43:30.978178 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:43:30 
crc kubenswrapper[4736]: E0316 15:43:30.979061 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:43:44 crc kubenswrapper[4736]: I0316 15:43:44.978715 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:43:44 crc kubenswrapper[4736]: E0316 15:43:44.980190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:43:48 crc kubenswrapper[4736]: I0316 15:43:48.322851 4736 scope.go:117] "RemoveContainer" containerID="03f0cc145fe562138ddc3e31435ea04222731111c6e8d5b00c5c62788542e54f" Mar 16 15:43:48 crc kubenswrapper[4736]: I0316 15:43:48.369975 4736 scope.go:117] "RemoveContainer" containerID="da1ab425d844aa2f278d9d46131758202b12c16bd610a5f05c16ec131c610fdb" Mar 16 15:43:48 crc kubenswrapper[4736]: I0316 15:43:48.423150 4736 scope.go:117] "RemoveContainer" containerID="83f02feb1d29de3ec609d84f317177939a50baf56442738d77ad551accdb782f" Mar 16 15:43:48 crc kubenswrapper[4736]: I0316 15:43:48.460143 4736 scope.go:117] "RemoveContainer" containerID="c3c8155b686b37f524cec516858f604d152cdf3bd27984a6fffbcb23462f9f7d" Mar 16 15:43:48 crc kubenswrapper[4736]: I0316 15:43:48.511864 4736 scope.go:117] "RemoveContainer" containerID="9bae3051308e2a907b4302da2df67ec0089a6bc0c225f16f27b88540abcc498c" Mar 16 15:43:52 crc kubenswrapper[4736]: I0316 15:43:52.065190 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dtt9x"] Mar 16 15:43:52 crc kubenswrapper[4736]: I0316 15:43:52.081653 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kslp8"] Mar 16 15:43:52 crc kubenswrapper[4736]: I0316 15:43:52.095250 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dtt9x"] Mar 16 15:43:52 crc kubenswrapper[4736]: I0316 15:43:52.111028 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kslp8"] Mar 16 15:43:53 crc kubenswrapper[4736]: I0316 15:43:53.025732 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7edb4e-af8b-4049-b0d9-cd107b5cb783" path="/var/lib/kubelet/pods/3f7edb4e-af8b-4049-b0d9-cd107b5cb783/volumes" Mar 16 15:43:53 crc kubenswrapper[4736]: I0316 15:43:53.051754 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9763d87e-5b6c-45e2-9b19-a9799e96f9fd" path="/var/lib/kubelet/pods/9763d87e-5b6c-45e2-9b19-a9799e96f9fd/volumes" Mar 16 15:43:53 crc kubenswrapper[4736]: I0316 15:43:53.064054 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-snm2s"] Mar 16 15:43:53 crc kubenswrapper[4736]: I0316 15:43:53.074748 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-05c6-account-create-update-kmnjh"] Mar 16 15:43:53 crc 
kubenswrapper[4736]: I0316 15:43:53.083599 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-snm2s"] Mar 16 15:43:53 crc kubenswrapper[4736]: I0316 15:43:53.095287 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-05c6-account-create-update-kmnjh"] Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.046811 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8b9d-account-create-update-jnndn"] Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.063474 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8b9d-account-create-update-jnndn"] Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.075740 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7c80-account-create-update-2q7s7"] Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.085131 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7c80-account-create-update-2q7s7"] Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.991294 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d25ea99-429a-495b-b0db-f30af232a75f" path="/var/lib/kubelet/pods/1d25ea99-429a-495b-b0db-f30af232a75f/volumes" Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.993346 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25372155-a159-47f2-aba8-8eacd41bf2ff" path="/var/lib/kubelet/pods/25372155-a159-47f2-aba8-8eacd41bf2ff/volumes" Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.995428 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fe90198-808b-48df-bdef-8341723e0511" path="/var/lib/kubelet/pods/6fe90198-808b-48df-bdef-8341723e0511/volumes" Mar 16 15:43:54 crc kubenswrapper[4736]: I0316 15:43:54.996850 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9093ab59-b372-4af4-8c67-a4ab97e79d33" path="/var/lib/kubelet/pods/9093ab59-b372-4af4-8c67-a4ab97e79d33/volumes" Mar 16 15:43:59 crc kubenswrapper[4736]: I0316 15:43:59.978950 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:43:59 crc kubenswrapper[4736]: E0316 15:43:59.980264 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.150360 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561264-wzn7k"] Mar 16 15:44:00 crc kubenswrapper[4736]: E0316 15:44:00.151013 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4b5032-89e0-4dee-8a6c-74f43a3762e0" containerName="oc" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.151038 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4b5032-89e0-4dee-8a6c-74f43a3762e0" containerName="oc" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.151302 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4b5032-89e0-4dee-8a6c-74f43a3762e0" containerName="oc" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.152220 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.155217 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.155425 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.156046 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.179647 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561264-wzn7k"] Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.250570 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgd2\" (UniqueName: \"kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2\") pod \"auto-csr-approver-29561264-wzn7k\" (UID: \"d5584bb7-64ce-4f53-b4e1-ce6432471a05\") " pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.352356 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgd2\" (UniqueName: \"kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2\") pod \"auto-csr-approver-29561264-wzn7k\" (UID: \"d5584bb7-64ce-4f53-b4e1-ce6432471a05\") " pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.397872 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgd2\" (UniqueName: \"kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2\") pod \"auto-csr-approver-29561264-wzn7k\" (UID: \"d5584bb7-64ce-4f53-b4e1-ce6432471a05\") " pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:00 crc kubenswrapper[4736]: I0316 15:44:00.474443 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:01 crc kubenswrapper[4736]: I0316 15:44:01.031266 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:44:01 crc kubenswrapper[4736]: I0316 15:44:01.051526 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561264-wzn7k"] Mar 16 15:44:01 crc kubenswrapper[4736]: I0316 15:44:01.327206 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" event={"ID":"d5584bb7-64ce-4f53-b4e1-ce6432471a05","Type":"ContainerStarted","Data":"5842466dc760fa71501a9bcec248fc97931409617d0c9733255dcf71677b8737"} Mar 16 15:44:02 crc kubenswrapper[4736]: I0316 15:44:02.340720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" event={"ID":"d5584bb7-64ce-4f53-b4e1-ce6432471a05","Type":"ContainerStarted","Data":"fe5f8c29b5871382a79fae2158a1f2272366a91776b03e7ba9a43560a946df4a"} Mar 16 15:44:02 crc kubenswrapper[4736]: I0316 15:44:02.361842 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" podStartSLOduration=1.466417002 podStartE2EDuration="2.36181987s" podCreationTimestamp="2026-03-16 15:44:00 +0000 UTC" firstStartedPulling="2026-03-16 15:44:01.031043004 +0000 UTC m=+1842.758433291" lastFinishedPulling="2026-03-16 15:44:01.926445872 +0000 UTC m=+1843.653836159" observedRunningTime="2026-03-16 15:44:02.355565468 +0000 UTC m=+1844.082955765" watchObservedRunningTime="2026-03-16 15:44:02.36181987 +0000 UTC m=+1844.089210167" Mar 16 15:44:03 crc kubenswrapper[4736]: I0316 15:44:03.349222 4736 generic.go:334] "Generic (PLEG): container finished" podID="d5584bb7-64ce-4f53-b4e1-ce6432471a05" containerID="fe5f8c29b5871382a79fae2158a1f2272366a91776b03e7ba9a43560a946df4a" exitCode=0 Mar 16 15:44:03 crc kubenswrapper[4736]: I0316 15:44:03.349436 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" event={"ID":"d5584bb7-64ce-4f53-b4e1-ce6432471a05","Type":"ContainerDied","Data":"fe5f8c29b5871382a79fae2158a1f2272366a91776b03e7ba9a43560a946df4a"} Mar 16 15:44:04 crc kubenswrapper[4736]: I0316 15:44:04.702789 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:04 crc kubenswrapper[4736]: I0316 15:44:04.851080 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgd2\" (UniqueName: \"kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2\") pod \"d5584bb7-64ce-4f53-b4e1-ce6432471a05\" (UID: \"d5584bb7-64ce-4f53-b4e1-ce6432471a05\") " Mar 16 15:44:04 crc kubenswrapper[4736]: I0316 15:44:04.877457 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2" (OuterVolumeSpecName: "kube-api-access-ccgd2") pod "d5584bb7-64ce-4f53-b4e1-ce6432471a05" (UID: "d5584bb7-64ce-4f53-b4e1-ce6432471a05"). InnerVolumeSpecName "kube-api-access-ccgd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:44:04 crc kubenswrapper[4736]: I0316 15:44:04.955376 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccgd2\" (UniqueName: \"kubernetes.io/projected/d5584bb7-64ce-4f53-b4e1-ce6432471a05-kube-api-access-ccgd2\") on node \"crc\" DevicePath \"\"" Mar 16 15:44:05 crc kubenswrapper[4736]: I0316 15:44:05.373735 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" event={"ID":"d5584bb7-64ce-4f53-b4e1-ce6432471a05","Type":"ContainerDied","Data":"5842466dc760fa71501a9bcec248fc97931409617d0c9733255dcf71677b8737"} Mar 16 15:44:05 crc kubenswrapper[4736]: I0316 15:44:05.373771 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5842466dc760fa71501a9bcec248fc97931409617d0c9733255dcf71677b8737" Mar 16 15:44:05 crc kubenswrapper[4736]: I0316 15:44:05.373824 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561264-wzn7k" Mar 16 15:44:05 crc kubenswrapper[4736]: I0316 15:44:05.429125 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561258-8kwp4"] Mar 16 15:44:05 crc kubenswrapper[4736]: I0316 15:44:05.437672 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561258-8kwp4"] Mar 16 15:44:06 crc kubenswrapper[4736]: I0316 15:44:06.990515 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f66e4c-c0a5-4996-a966-05636ae1b7ad" path="/var/lib/kubelet/pods/a5f66e4c-c0a5-4996-a966-05636ae1b7ad/volumes" Mar 16 15:44:14 crc kubenswrapper[4736]: I0316 15:44:14.978430 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:44:14 crc kubenswrapper[4736]: E0316 15:44:14.979920 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:44:26 crc kubenswrapper[4736]: I0316 15:44:26.978270 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:44:26 crc kubenswrapper[4736]: E0316 15:44:26.979314 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:44:40 crc kubenswrapper[4736]: I0316 15:44:40.981742 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:44:40 crc kubenswrapper[4736]: E0316 15:44:40.985917 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:44:47 crc kubenswrapper[4736]: I0316 15:44:47.764484 4736 generic.go:334] "Generic (PLEG): container finished" podID="bac31a5a-12b7-4a43-b596-91352137545b" containerID="7322efc2e2168c71fff501ba61c003bad7f19d8068e25f0649f6b3963fb59ec0" exitCode=0 Mar 16 15:44:47 crc kubenswrapper[4736]: I0316 15:44:47.764559 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" event={"ID":"bac31a5a-12b7-4a43-b596-91352137545b","Type":"ContainerDied","Data":"7322efc2e2168c71fff501ba61c003bad7f19d8068e25f0649f6b3963fb59ec0"} Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.664217 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:44:48 crc kubenswrapper[4736]: E0316 15:44:48.664700 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5584bb7-64ce-4f53-b4e1-ce6432471a05" containerName="oc" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.664720 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5584bb7-64ce-4f53-b4e1-ce6432471a05" containerName="oc" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.664946 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5584bb7-64ce-4f53-b4e1-ce6432471a05" containerName="oc" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.666789 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.670686 4736 scope.go:117] "RemoveContainer" containerID="e3fb7bbc41336ef2c7255445d2c8b421415e29b8a23f058fa4ef68f054054db8" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.705683 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.714601 4736 scope.go:117] "RemoveContainer" containerID="7d3af9008c47cc08df959583174e801bed5e3721387782fc8131598486ea4ec1" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.779737 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.780220 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.780275 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdbk\" (UniqueName: \"kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.825392 
4736 scope.go:117] "RemoveContainer" containerID="6b9189de1af32b1e9bfc4c2d2b94b0385f0cbd4e383ac25a73770e7853a1f2ec" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.873420 4736 scope.go:117] "RemoveContainer" containerID="82950c21ae0eb5e9a47e77a7fc53aa6a8bff059c10dabb29bd372e5c84f3dce0" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.883131 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.883186 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.883245 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdbk\" (UniqueName: \"kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.883785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.883862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.910537 4736 scope.go:117] "RemoveContainer" containerID="7854849ab46c0bef0984d2c5417cc6c87124fbffa0f8908e2aaa37f5c675faaa" Mar 16 15:44:48 crc kubenswrapper[4736]: I0316 15:44:48.911540 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdbk\" (UniqueName: \"kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk\") pod \"redhat-operators-kjzt4\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.010955 4736 scope.go:117] "RemoveContainer" containerID="39b0a7d78ec606ac459c47e296aafc8a73c10a01bd29d1100fbd7b6d079b2d56" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.039048 4736 scope.go:117] "RemoveContainer" containerID="2dc1cace83828633251f3ff46d3f8a607fd74dc90e2b56935c551ecb0b17c2e2" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.056512 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.147685 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.306124 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qflxs\" (UniqueName: \"kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs\") pod \"bac31a5a-12b7-4a43-b596-91352137545b\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.306200 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory\") pod \"bac31a5a-12b7-4a43-b596-91352137545b\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.306229 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam\") pod \"bac31a5a-12b7-4a43-b596-91352137545b\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.306260 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle\") pod \"bac31a5a-12b7-4a43-b596-91352137545b\" (UID: \"bac31a5a-12b7-4a43-b596-91352137545b\") " Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.312792 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bac31a5a-12b7-4a43-b596-91352137545b" (UID: "bac31a5a-12b7-4a43-b596-91352137545b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.313919 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs" (OuterVolumeSpecName: "kube-api-access-qflxs") pod "bac31a5a-12b7-4a43-b596-91352137545b" (UID: "bac31a5a-12b7-4a43-b596-91352137545b"). InnerVolumeSpecName "kube-api-access-qflxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.340255 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory" (OuterVolumeSpecName: "inventory") pod "bac31a5a-12b7-4a43-b596-91352137545b" (UID: "bac31a5a-12b7-4a43-b596-91352137545b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.353360 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bac31a5a-12b7-4a43-b596-91352137545b" (UID: "bac31a5a-12b7-4a43-b596-91352137545b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.408860 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qflxs\" (UniqueName: \"kubernetes.io/projected/bac31a5a-12b7-4a43-b596-91352137545b-kube-api-access-qflxs\") on node \"crc\" DevicePath \"\"" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.408889 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.408901 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.408910 4736 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bac31a5a-12b7-4a43-b596-91352137545b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.640467 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.833741 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerStarted","Data":"bef89b06046cd55798ca2515b00650392bf15d3be4c9e5c98b148fe8a5715973"} Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.835807 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" event={"ID":"bac31a5a-12b7-4a43-b596-91352137545b","Type":"ContainerDied","Data":"3a5b1aa8f643940e4eb43557e58c1dddebf94fe757e948f709ea7dda7b1c4aa4"} Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.835874 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5b1aa8f643940e4eb43557e58c1dddebf94fe757e948f709ea7dda7b1c4aa4" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.835953 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.897803 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx"] Mar 16 15:44:49 crc kubenswrapper[4736]: E0316 15:44:49.898448 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac31a5a-12b7-4a43-b596-91352137545b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.898528 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac31a5a-12b7-4a43-b596-91352137545b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.898775 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac31a5a-12b7-4a43-b596-91352137545b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.899466 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.904614 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.904614 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.904775 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.904939 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.917619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.917731 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.917812 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l79nb\" (UniqueName: \"kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:49 crc kubenswrapper[4736]: I0316 15:44:49.927033 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx"] Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.020962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.021078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.021177 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l79nb\" (UniqueName: 
\"kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.029402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.029612 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.075994 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l79nb\" (UniqueName: \"kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xxldx\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.217363 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.768987 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx"] Mar 16 15:44:50 crc kubenswrapper[4736]: W0316 15:44:50.775339 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad968503_ce02_492c_a946_1d0e986a99ff.slice/crio-0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be WatchSource:0}: Error finding container 0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be: Status 404 returned error can't find the container with id 0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.847913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" event={"ID":"ad968503-ce02-492c-a946-1d0e986a99ff","Type":"ContainerStarted","Data":"0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be"} Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.849444 4736 generic.go:334] "Generic (PLEG): container finished" podID="59fa174e-8c02-464a-b715-57b0108a556d" containerID="f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0" exitCode=0 Mar 16 15:44:50 crc kubenswrapper[4736]: I0316 15:44:50.849494 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerDied","Data":"f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0"} Mar 16 15:44:51 crc kubenswrapper[4736]: I0316 15:44:51.873976 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerStarted","Data":"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5"} Mar 16 15:44:51 crc kubenswrapper[4736]: I0316 15:44:51.877049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" event={"ID":"ad968503-ce02-492c-a946-1d0e986a99ff","Type":"ContainerStarted","Data":"a9a17aa45f5e1868a9de42410cb3ffc577a5974534c25fcc4a76d95c119e970f"} Mar 16 15:44:51 crc kubenswrapper[4736]: I0316 15:44:51.916355 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" podStartSLOduration=2.356204146 podStartE2EDuration="2.916337158s" podCreationTimestamp="2026-03-16 15:44:49 +0000 UTC" firstStartedPulling="2026-03-16 15:44:50.777526947 +0000 UTC m=+1892.504917234" lastFinishedPulling="2026-03-16 15:44:51.337659959 +0000 UTC m=+1893.065050246" observedRunningTime="2026-03-16 15:44:51.910858778 +0000 UTC m=+1893.638249065" watchObservedRunningTime="2026-03-16 15:44:51.916337158 +0000 UTC m=+1893.643727435" Mar 16 15:44:52 crc kubenswrapper[4736]: I0316 15:44:52.978782 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:44:52 crc kubenswrapper[4736]: E0316 15:44:52.979458 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:44:56 crc kubenswrapper[4736]: I0316 15:44:56.924503 4736 generic.go:334] "Generic (PLEG): container finished" podID="59fa174e-8c02-464a-b715-57b0108a556d" containerID="4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5" exitCode=0 Mar 16 15:44:56 crc kubenswrapper[4736]: I0316 15:44:56.924705 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerDied","Data":"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5"} Mar 16 15:44:57 crc kubenswrapper[4736]: I0316 15:44:57.945327 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerStarted","Data":"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0"} Mar 16 15:44:57 crc kubenswrapper[4736]: I0316 15:44:57.972859 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjzt4" podStartSLOduration=3.474093893 podStartE2EDuration="9.972841663s" podCreationTimestamp="2026-03-16 15:44:48 +0000 UTC" firstStartedPulling="2026-03-16 15:44:50.852372636 +0000 UTC m=+1892.579762923" lastFinishedPulling="2026-03-16 15:44:57.351120406 +0000 UTC m=+1899.078510693" observedRunningTime="2026-03-16 15:44:57.970874199 +0000 UTC m=+1899.698264486" watchObservedRunningTime="2026-03-16 15:44:57.972841663 +0000 UTC m=+1899.700231950" Mar 16 15:44:59 crc kubenswrapper[4736]: I0316 15:44:59.057538 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:44:59 crc kubenswrapper[4736]: I0316 15:44:59.057787 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.150010 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg"] Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.152574 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.156776 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.157088 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.162290 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg"] Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.164907 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjzt4" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" probeResult="failure" output=< Mar 16 15:45:00 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:45:00 crc kubenswrapper[4736]: > Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.324944 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6qs\" (UniqueName: \"kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.325027 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.325153 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.427545 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.427844 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bh6qs\" (UniqueName: \"kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.427921 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.429707 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.435897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.449087 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh6qs\" (UniqueName: \"kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs\") pod \"collect-profiles-29561265-v59gg\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:00 crc kubenswrapper[4736]: I0316 15:45:00.470163 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:01 crc kubenswrapper[4736]: W0316 15:45:01.020062 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bfc0fbd_42db_46d1_9c49_32da6f56fef4.slice/crio-7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a WatchSource:0}: Error finding container 7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a: Status 404 returned error can't find the container with id 7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a Mar 16 15:45:01 crc kubenswrapper[4736]: I0316 15:45:01.031211 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg"] Mar 16 15:45:01 crc kubenswrapper[4736]: I0316 15:45:01.988490 4736 generic.go:334] "Generic (PLEG): container finished" podID="4bfc0fbd-42db-46d1-9c49-32da6f56fef4" containerID="f728562bfa062c53d64c656da6a15919972173ad0695ae030c04d1368a0482a1" exitCode=0 Mar 16 15:45:01 crc kubenswrapper[4736]: I0316 15:45:01.988535 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" event={"ID":"4bfc0fbd-42db-46d1-9c49-32da6f56fef4","Type":"ContainerDied","Data":"f728562bfa062c53d64c656da6a15919972173ad0695ae030c04d1368a0482a1"} Mar 16 15:45:01 crc kubenswrapper[4736]: I0316 15:45:01.988561 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" event={"ID":"4bfc0fbd-42db-46d1-9c49-32da6f56fef4","Type":"ContainerStarted","Data":"7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a"} Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.331921 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.489998 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh6qs\" (UniqueName: \"kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs\") pod \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.490238 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume\") pod \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.490288 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume\") pod \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\" (UID: \"4bfc0fbd-42db-46d1-9c49-32da6f56fef4\") " Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.490811 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bfc0fbd-42db-46d1-9c49-32da6f56fef4" (UID: "4bfc0fbd-42db-46d1-9c49-32da6f56fef4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.501682 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bfc0fbd-42db-46d1-9c49-32da6f56fef4" (UID: "4bfc0fbd-42db-46d1-9c49-32da6f56fef4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.502342 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs" (OuterVolumeSpecName: "kube-api-access-bh6qs") pod "4bfc0fbd-42db-46d1-9c49-32da6f56fef4" (UID: "4bfc0fbd-42db-46d1-9c49-32da6f56fef4"). InnerVolumeSpecName "kube-api-access-bh6qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.592407 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh6qs\" (UniqueName: \"kubernetes.io/projected/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-kube-api-access-bh6qs\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.592435 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:03 crc kubenswrapper[4736]: I0316 15:45:03.592445 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bfc0fbd-42db-46d1-9c49-32da6f56fef4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:04 crc kubenswrapper[4736]: I0316 15:45:04.013186 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" event={"ID":"4bfc0fbd-42db-46d1-9c49-32da6f56fef4","Type":"ContainerDied","Data":"7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a"} Mar 16 15:45:04 crc kubenswrapper[4736]: I0316 15:45:04.013227 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7edb400847a367e00f20592ae761c5bd4108b80c830bb24caf783597891aa22a" Mar 16 15:45:04 crc kubenswrapper[4736]: I0316 15:45:04.013236 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg" Mar 16 15:45:06 crc kubenswrapper[4736]: I0316 15:45:06.978824 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:45:06 crc kubenswrapper[4736]: E0316 15:45:06.979441 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:45:10 crc kubenswrapper[4736]: I0316 15:45:10.109956 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjzt4" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" probeResult="failure" output=< Mar 16 15:45:10 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:45:10 crc kubenswrapper[4736]: > Mar 16 15:45:11 crc kubenswrapper[4736]: I0316 15:45:11.041738 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zhxnt"] Mar 16 15:45:11 crc kubenswrapper[4736]: I0316 15:45:11.049795 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zhxnt"] Mar 16 15:45:12 crc kubenswrapper[4736]: I0316 15:45:12.991578 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c548edb2-4f30-4790-bf13-c2509601cd25" path="/var/lib/kubelet/pods/c548edb2-4f30-4790-bf13-c2509601cd25/volumes" Mar 16 15:45:18 crc kubenswrapper[4736]: I0316 15:45:18.987724 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:45:20 crc kubenswrapper[4736]: I0316 15:45:20.103124 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjzt4" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" probeResult="failure" output=< Mar 16 15:45:20 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:45:20 crc kubenswrapper[4736]: > Mar 16 15:45:20 crc kubenswrapper[4736]: I0316 15:45:20.140876 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90"} Mar 16 15:45:30 crc kubenswrapper[4736]: I0316 15:45:30.128567 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kjzt4" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" probeResult="failure" output=< Mar 16 15:45:30 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:45:30 crc kubenswrapper[4736]: > Mar 16 15:45:39 crc kubenswrapper[4736]: I0316 15:45:39.119774 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:45:39 crc kubenswrapper[4736]: I0316 15:45:39.169317 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:45:39 crc kubenswrapper[4736]: I0316 
15:45:39.362049 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:45:40 crc kubenswrapper[4736]: I0316 15:45:40.340135 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjzt4" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" containerID="cri-o://7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0" gracePeriod=2 Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.049590 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.164020 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content\") pod \"59fa174e-8c02-464a-b715-57b0108a556d\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.164689 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities\") pod \"59fa174e-8c02-464a-b715-57b0108a556d\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.165341 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities" (OuterVolumeSpecName: "utilities") pod "59fa174e-8c02-464a-b715-57b0108a556d" (UID: "59fa174e-8c02-464a-b715-57b0108a556d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.165557 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpdbk\" (UniqueName: \"kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk\") pod \"59fa174e-8c02-464a-b715-57b0108a556d\" (UID: \"59fa174e-8c02-464a-b715-57b0108a556d\") " Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.166473 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.177269 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk" (OuterVolumeSpecName: "kube-api-access-bpdbk") pod "59fa174e-8c02-464a-b715-57b0108a556d" (UID: "59fa174e-8c02-464a-b715-57b0108a556d"). InnerVolumeSpecName "kube-api-access-bpdbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.268044 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpdbk\" (UniqueName: \"kubernetes.io/projected/59fa174e-8c02-464a-b715-57b0108a556d-kube-api-access-bpdbk\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.327604 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59fa174e-8c02-464a-b715-57b0108a556d" (UID: "59fa174e-8c02-464a-b715-57b0108a556d"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.351090 4736 generic.go:334] "Generic (PLEG): container finished" podID="59fa174e-8c02-464a-b715-57b0108a556d" containerID="7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0" exitCode=0 Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.351143 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerDied","Data":"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0"} Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.351170 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjzt4" event={"ID":"59fa174e-8c02-464a-b715-57b0108a556d","Type":"ContainerDied","Data":"bef89b06046cd55798ca2515b00650392bf15d3be4c9e5c98b148fe8a5715973"} Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.351189 4736 scope.go:117] "RemoveContainer" containerID="7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.351352 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjzt4" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.370062 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59fa174e-8c02-464a-b715-57b0108a556d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.383408 4736 scope.go:117] "RemoveContainer" containerID="4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.386225 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.395514 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjzt4"] Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.400460 4736 scope.go:117] "RemoveContainer" containerID="f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.439637 4736 scope.go:117] "RemoveContainer" containerID="7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0" Mar 16 15:45:41 crc kubenswrapper[4736]: E0316 15:45:41.440245 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0\": container with ID starting with 7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0 not found: ID does not exist" containerID="7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.440279 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0"} err="failed to get container status \"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0\": rpc error: code = NotFound desc = could not find container \"7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0\": container with ID starting with 7e94bff2d7228f45df238c0bd2ad5fd75012053d9d3e1029f92a74f838c66ab0 not found: ID 
does not exist" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.440302 4736 scope.go:117] "RemoveContainer" containerID="4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5" Mar 16 15:45:41 crc kubenswrapper[4736]: E0316 15:45:41.441664 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5\": container with ID starting with 4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5 not found: ID does not exist" containerID="4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.441691 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5"} err="failed to get container status \"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5\": rpc error: code = NotFound desc = could not find container \"4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5\": container with ID starting with 4d0bd760b085e973f327273d457f53155cb130156e518ac08931932201fbc0c5 not found: ID does not exist" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.441707 4736 scope.go:117] "RemoveContainer" containerID="f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0" Mar 16 15:45:41 crc kubenswrapper[4736]: E0316 15:45:41.442369 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0\": container with ID starting with f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0 not found: ID does not exist" containerID="f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0" Mar 16 15:45:41 crc kubenswrapper[4736]: I0316 15:45:41.442414 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0"} err="failed to get container status \"f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0\": rpc error: code = NotFound desc = could not find container \"f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0\": container with ID starting with f75c22cbcfbc16238a38915b3827fffb17a0d4febf63e64ae914af70aad16cc0 not found: ID does not exist" Mar 16 15:45:42 crc kubenswrapper[4736]: I0316 15:45:42.997276 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59fa174e-8c02-464a-b715-57b0108a556d" path="/var/lib/kubelet/pods/59fa174e-8c02-464a-b715-57b0108a556d/volumes" Mar 16 15:45:48 crc kubenswrapper[4736]: I0316 15:45:48.052323 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gmclf"] Mar 16 15:45:48 crc kubenswrapper[4736]: I0316 15:45:48.064155 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gmclf"] Mar 16 15:45:48 crc kubenswrapper[4736]: I0316 15:45:48.991407 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9139de-bf71-4fc5-8c71-071cb42f9f35" path="/var/lib/kubelet/pods/4c9139de-bf71-4fc5-8c71-071cb42f9f35/volumes" Mar 16 15:45:49 crc kubenswrapper[4736]: I0316 15:45:49.271172 4736 scope.go:117] "RemoveContainer" containerID="593f6c4f6e367017237c6e34d929180b33ecce95153e774ffaccc1d0e1699812" Mar 16 15:45:49 crc kubenswrapper[4736]: I0316 
15:45:49.300334 4736 scope.go:117] "RemoveContainer" containerID="2d5ccf73209154aeb5ca1a33715bc3834de2d61dc950286f511a9448e10724e9" Mar 16 15:45:51 crc kubenswrapper[4736]: I0316 15:45:51.045890 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vlmpz"] Mar 16 15:45:51 crc kubenswrapper[4736]: I0316 15:45:51.059733 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-vlmpz"] Mar 16 15:45:52 crc kubenswrapper[4736]: I0316 15:45:52.996375 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3db0b47-39c5-4414-b863-6c472b6ee78a" path="/var/lib/kubelet/pods/d3db0b47-39c5-4414-b863-6c472b6ee78a/volumes" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.155407 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561266-9xgc7"] Mar 16 15:46:00 crc kubenswrapper[4736]: E0316 15:46:00.156772 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bfc0fbd-42db-46d1-9c49-32da6f56fef4" containerName="collect-profiles" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.156795 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bfc0fbd-42db-46d1-9c49-32da6f56fef4" containerName="collect-profiles" Mar 16 15:46:00 crc kubenswrapper[4736]: E0316 15:46:00.156827 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="extract-content" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.156839 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="extract-content" Mar 16 15:46:00 crc kubenswrapper[4736]: E0316 15:46:00.156890 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.156901 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" Mar 16 15:46:00 crc kubenswrapper[4736]: E0316 15:46:00.156915 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="extract-utilities" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.156926 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="extract-utilities" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.157233 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="59fa174e-8c02-464a-b715-57b0108a556d" containerName="registry-server" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.157263 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bfc0fbd-42db-46d1-9c49-32da6f56fef4" containerName="collect-profiles" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.158320 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.163942 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.164830 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.166619 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.180191 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561266-9xgc7"] Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.268857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccc2f\" (UniqueName: \"kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f\") pod \"auto-csr-approver-29561266-9xgc7\" (UID: \"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f\") " pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.371942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccc2f\" (UniqueName: \"kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f\") pod \"auto-csr-approver-29561266-9xgc7\" (UID: \"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f\") " pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.395837 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccc2f\" (UniqueName: \"kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f\") pod \"auto-csr-approver-29561266-9xgc7\" (UID: \"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f\") " pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.481219 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:00 crc kubenswrapper[4736]: I0316 15:46:00.934378 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561266-9xgc7"] Mar 16 15:46:01 crc kubenswrapper[4736]: I0316 15:46:01.554380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" event={"ID":"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f","Type":"ContainerStarted","Data":"88e48f242d8e8f95ecc7548b661d1e99291a0e5b72c44bc890dccc280ec4a009"} Mar 16 15:46:03 crc kubenswrapper[4736]: I0316 15:46:03.571195 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" event={"ID":"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f","Type":"ContainerStarted","Data":"f956110bc5b24afb3302309c55739d10017f3aa05b5cec1835b0ea89e60a4c5e"} Mar 16 15:46:03 crc kubenswrapper[4736]: I0316 15:46:03.584043 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" podStartSLOduration=2.277020109 podStartE2EDuration="3.584023984s" podCreationTimestamp="2026-03-16 15:46:00 +0000 UTC" firstStartedPulling="2026-03-16 15:46:00.943684045 +0000 UTC m=+1962.671074332" lastFinishedPulling="2026-03-16 15:46:02.25068791 +0000 UTC m=+1963.978078207" observedRunningTime="2026-03-16 15:46:03.58349336 +0000 UTC m=+1965.310883657" watchObservedRunningTime="2026-03-16 15:46:03.584023984 +0000 UTC m=+1965.311414271" Mar 16 15:46:04 crc kubenswrapper[4736]: I0316 15:46:04.581064 4736 generic.go:334] "Generic (PLEG): container finished" podID="e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" containerID="f956110bc5b24afb3302309c55739d10017f3aa05b5cec1835b0ea89e60a4c5e" exitCode=0 Mar 16 15:46:04 crc kubenswrapper[4736]: I0316 15:46:04.581123 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" event={"ID":"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f","Type":"ContainerDied","Data":"f956110bc5b24afb3302309c55739d10017f3aa05b5cec1835b0ea89e60a4c5e"} Mar 16 15:46:05 crc kubenswrapper[4736]: I0316 15:46:05.954038 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.089685 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccc2f\" (UniqueName: \"kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f\") pod \"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f\" (UID: \"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f\") " Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.097506 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f" (OuterVolumeSpecName: "kube-api-access-ccc2f") pod "e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" (UID: "e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f"). InnerVolumeSpecName "kube-api-access-ccc2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.192269 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccc2f\" (UniqueName: \"kubernetes.io/projected/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f-kube-api-access-ccc2f\") on node \"crc\" DevicePath \"\"" Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.602480 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" event={"ID":"e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f","Type":"ContainerDied","Data":"88e48f242d8e8f95ecc7548b661d1e99291a0e5b72c44bc890dccc280ec4a009"} Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.602535 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88e48f242d8e8f95ecc7548b661d1e99291a0e5b72c44bc890dccc280ec4a009" Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.602509 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561266-9xgc7" Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.649715 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561260-bflcm"] Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.661921 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561260-bflcm"] Mar 16 15:46:06 crc kubenswrapper[4736]: I0316 15:46:06.992260 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ed35f7-92c1-4d82-acea-7733d8dd9be9" path="/var/lib/kubelet/pods/07ed35f7-92c1-4d82-acea-7733d8dd9be9/volumes" Mar 16 15:46:33 crc kubenswrapper[4736]: I0316 15:46:33.044376 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qw4c4"] Mar 16 15:46:33 crc kubenswrapper[4736]: I0316 15:46:33.059577 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qw4c4"] Mar 16 15:46:34 crc kubenswrapper[4736]: I0316 15:46:34.992748 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edef7df8-01f6-4f77-a1e9-25f7feef5ccd" path="/var/lib/kubelet/pods/edef7df8-01f6-4f77-a1e9-25f7feef5ccd/volumes" Mar 16 15:46:46 crc kubenswrapper[4736]: I0316 15:46:46.325345 4736 generic.go:334] "Generic (PLEG): container finished" podID="ad968503-ce02-492c-a946-1d0e986a99ff" containerID="a9a17aa45f5e1868a9de42410cb3ffc577a5974534c25fcc4a76d95c119e970f" exitCode=0 Mar 16 15:46:46 crc kubenswrapper[4736]: I0316 15:46:46.325494 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" event={"ID":"ad968503-ce02-492c-a946-1d0e986a99ff","Type":"ContainerDied","Data":"a9a17aa45f5e1868a9de42410cb3ffc577a5974534c25fcc4a76d95c119e970f"} Mar 16 15:46:47 crc kubenswrapper[4736]: I0316 15:46:47.843547 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:46:47 crc kubenswrapper[4736]: I0316 15:46:47.952202 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory\") pod \"ad968503-ce02-492c-a946-1d0e986a99ff\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " Mar 16 15:46:47 crc kubenswrapper[4736]: I0316 15:46:47.952579 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l79nb\" (UniqueName: \"kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb\") pod \"ad968503-ce02-492c-a946-1d0e986a99ff\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " Mar 16 15:46:47 crc kubenswrapper[4736]: I0316 15:46:47.952691 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam\") pod \"ad968503-ce02-492c-a946-1d0e986a99ff\" (UID: \"ad968503-ce02-492c-a946-1d0e986a99ff\") " Mar 16 15:46:47 crc kubenswrapper[4736]: I0316 15:46:47.962515 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb" (OuterVolumeSpecName: "kube-api-access-l79nb") pod "ad968503-ce02-492c-a946-1d0e986a99ff" (UID: "ad968503-ce02-492c-a946-1d0e986a99ff"). InnerVolumeSpecName "kube-api-access-l79nb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.001768 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ad968503-ce02-492c-a946-1d0e986a99ff" (UID: "ad968503-ce02-492c-a946-1d0e986a99ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.011470 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory" (OuterVolumeSpecName: "inventory") pod "ad968503-ce02-492c-a946-1d0e986a99ff" (UID: "ad968503-ce02-492c-a946-1d0e986a99ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.054555 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.054586 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l79nb\" (UniqueName: \"kubernetes.io/projected/ad968503-ce02-492c-a946-1d0e986a99ff-kube-api-access-l79nb\") on node \"crc\" DevicePath \"\"" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.054597 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ad968503-ce02-492c-a946-1d0e986a99ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.348909 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" event={"ID":"ad968503-ce02-492c-a946-1d0e986a99ff","Type":"ContainerDied","Data":"0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be"} Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.348960 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e54df5ac9630e4f22afb5d96abee5d3ffa836ea1e01bfebf11b65fd0194f6be" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.349020 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xxldx" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.458572 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t"] Mar 16 15:46:48 crc kubenswrapper[4736]: E0316 15:46:48.459289 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad968503-ce02-492c-a946-1d0e986a99ff" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.459323 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad968503-ce02-492c-a946-1d0e986a99ff" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 16 15:46:48 crc kubenswrapper[4736]: E0316 15:46:48.459362 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" containerName="oc" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.459375 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" containerName="oc" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.459715 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad968503-ce02-492c-a946-1d0e986a99ff" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.459786 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" containerName="oc" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.460852 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.465971 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.466282 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.466902 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.468055 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.470978 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t"] Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.565043 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfrz\" (UniqueName: \"kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.565086 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.565237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.667219 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.667322 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.667424 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfrz\" (UniqueName: 
\"kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.675884 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.676257 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.686030 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfrz\" (UniqueName: \"kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-shf2t\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:48 crc kubenswrapper[4736]: I0316 15:46:48.793642 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.375252 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t"] Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.451072 4736 scope.go:117] "RemoveContainer" containerID="f2a596d31f9913531bb6a14d73b142598d0fa23305070bad9c9782b7947b7d30" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.486437 4736 scope.go:117] "RemoveContainer" containerID="ca9422bfcff2dff1724b6a93440833ded45aa9f3814cc287bff4ca666777b1c5" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.539430 4736 scope.go:117] "RemoveContainer" containerID="eadeab5dd3ae7d2eb43c14cc41c2c0af76eb0f8e113b9b4501a79510f56957c7" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.558312 4736 scope.go:117] "RemoveContainer" containerID="7087e06342218756c470df5b1083d0946e2775c0af67bc6e95e97996790157b3" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.587961 4736 scope.go:117] "RemoveContainer" containerID="b88453b0f7cc550dd01bb72ee2568af1bda0a64a99cacda2dd02fc8dd2537e6a" Mar 16 15:46:49 crc kubenswrapper[4736]: I0316 15:46:49.608508 4736 scope.go:117] "RemoveContainer" containerID="77f54924305095b576cf3ff70fee23ebc95ce4a82f1a0e59911a5b2a12ccc9e2" Mar 16 15:46:50 crc kubenswrapper[4736]: I0316 15:46:50.367152 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" event={"ID":"123f23c5-bce7-4080-a7da-1bce3b43d685","Type":"ContainerStarted","Data":"bed08e548f83cae20ddcf4b0a4aa0e3047051027fc3ad346d3af8be2de8d85f7"} Mar 16 15:46:50 crc kubenswrapper[4736]: I0316 15:46:50.367489 4736 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" event={"ID":"123f23c5-bce7-4080-a7da-1bce3b43d685","Type":"ContainerStarted","Data":"7b7787aabe0aaab210054c39fb08b936224fed913a8259a13d4d2a5560a7a190"} Mar 16 15:46:50 crc kubenswrapper[4736]: I0316 15:46:50.397736 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" podStartSLOduration=1.8300734589999998 podStartE2EDuration="2.397715178s" podCreationTimestamp="2026-03-16 15:46:48 +0000 UTC" firstStartedPulling="2026-03-16 15:46:49.385658539 +0000 UTC m=+2011.113048846" lastFinishedPulling="2026-03-16 15:46:49.953300278 +0000 UTC m=+2011.680690565" observedRunningTime="2026-03-16 15:46:50.386215094 +0000 UTC m=+2012.113605391" watchObservedRunningTime="2026-03-16 15:46:50.397715178 +0000 UTC m=+2012.125105485" Mar 16 15:47:38 crc kubenswrapper[4736]: I0316 15:47:38.507888 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:47:38 crc kubenswrapper[4736]: I0316 15:47:38.508429 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:47:58 crc kubenswrapper[4736]: I0316 15:47:58.993389 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" event={"ID":"123f23c5-bce7-4080-a7da-1bce3b43d685","Type":"ContainerDied","Data":"bed08e548f83cae20ddcf4b0a4aa0e3047051027fc3ad346d3af8be2de8d85f7"} Mar 16 15:47:58 crc kubenswrapper[4736]: I0316 15:47:58.993400 4736 generic.go:334] "Generic (PLEG): container finished" podID="123f23c5-bce7-4080-a7da-1bce3b43d685" containerID="bed08e548f83cae20ddcf4b0a4aa0e3047051027fc3ad346d3af8be2de8d85f7" exitCode=0 Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.133779 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561268-pr7tq"] Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.138798 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfp5\" (UniqueName: \"kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5\") pod \"auto-csr-approver-29561268-pr7tq\" (UID: \"80cd2bef-f6eb-4e45-8c17-b74fab484d78\") " pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.140300 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.144491 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.144533 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.144827 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.147930 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561268-pr7tq"] Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.239978 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfp5\" (UniqueName: \"kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5\") pod \"auto-csr-approver-29561268-pr7tq\" (UID: \"80cd2bef-f6eb-4e45-8c17-b74fab484d78\") " pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.262499 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfp5\" (UniqueName: \"kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5\") pod \"auto-csr-approver-29561268-pr7tq\" (UID: \"80cd2bef-f6eb-4e45-8c17-b74fab484d78\") " pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.411473 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.461944 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.545376 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory\") pod \"123f23c5-bce7-4080-a7da-1bce3b43d685\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.545583 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam\") pod \"123f23c5-bce7-4080-a7da-1bce3b43d685\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.545703 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgfrz\" (UniqueName: \"kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz\") pod \"123f23c5-bce7-4080-a7da-1bce3b43d685\" (UID: \"123f23c5-bce7-4080-a7da-1bce3b43d685\") " Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.551729 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz" (OuterVolumeSpecName: "kube-api-access-zgfrz") pod "123f23c5-bce7-4080-a7da-1bce3b43d685" (UID: "123f23c5-bce7-4080-a7da-1bce3b43d685"). InnerVolumeSpecName "kube-api-access-zgfrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.597646 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "123f23c5-bce7-4080-a7da-1bce3b43d685" (UID: "123f23c5-bce7-4080-a7da-1bce3b43d685"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.602037 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory" (OuterVolumeSpecName: "inventory") pod "123f23c5-bce7-4080-a7da-1bce3b43d685" (UID: "123f23c5-bce7-4080-a7da-1bce3b43d685"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.649306 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.649335 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgfrz\" (UniqueName: \"kubernetes.io/projected/123f23c5-bce7-4080-a7da-1bce3b43d685-kube-api-access-zgfrz\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.649345 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/123f23c5-bce7-4080-a7da-1bce3b43d685-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:00 crc kubenswrapper[4736]: I0316 15:48:00.995014 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561268-pr7tq"] Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.009628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" event={"ID":"80cd2bef-f6eb-4e45-8c17-b74fab484d78","Type":"ContainerStarted","Data":"276bacd2913d3c7465b9c48f11cdcf5c944c403a8887ebd83f74e9375c054115"} Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.011604 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" event={"ID":"123f23c5-bce7-4080-a7da-1bce3b43d685","Type":"ContainerDied","Data":"7b7787aabe0aaab210054c39fb08b936224fed913a8259a13d4d2a5560a7a190"} Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.011630 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b7787aabe0aaab210054c39fb08b936224fed913a8259a13d4d2a5560a7a190" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.011680 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-shf2t" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.093225 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj"] Mar 16 15:48:01 crc kubenswrapper[4736]: E0316 15:48:01.093687 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f23c5-bce7-4080-a7da-1bce3b43d685" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.093705 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f23c5-bce7-4080-a7da-1bce3b43d685" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.093901 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="123f23c5-bce7-4080-a7da-1bce3b43d685" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.094510 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.096944 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.097251 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.097560 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.097796 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.116192 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj"] Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.158189 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.158298 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5zs4\" (UniqueName: \"kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.158417 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.260605 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.260732 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.260757 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5zs4\" (UniqueName: \"kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.265311 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.265649 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.278842 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5zs4\" (UniqueName: \"kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.409669 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:01 crc kubenswrapper[4736]: I0316 15:48:01.963791 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj"] Mar 16 15:48:02 crc kubenswrapper[4736]: I0316 15:48:02.020039 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" event={"ID":"6f274275-8257-4335-b3d8-a2441d5ddf1e","Type":"ContainerStarted","Data":"492a261cbb80272fdd1d593715677ede76469f63ee6f636912960b1b22f2f44f"} Mar 16 15:48:03 crc kubenswrapper[4736]: I0316 15:48:03.043124 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" event={"ID":"6f274275-8257-4335-b3d8-a2441d5ddf1e","Type":"ContainerStarted","Data":"fb95f77ba5811e0b42ea70b22ffe655cfb7318e9c6381cfa2545feb96031e6f9"} Mar 16 15:48:03 crc kubenswrapper[4736]: I0316 15:48:03.047919 4736 generic.go:334] "Generic (PLEG): container finished" podID="80cd2bef-f6eb-4e45-8c17-b74fab484d78" containerID="e206711ee14c56b94a7025a8ffc1f49637c871d58337d4409e1b2b74fe951b0e" exitCode=0 Mar 16 15:48:03 crc kubenswrapper[4736]: I0316 15:48:03.048157 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" event={"ID":"80cd2bef-f6eb-4e45-8c17-b74fab484d78","Type":"ContainerDied","Data":"e206711ee14c56b94a7025a8ffc1f49637c871d58337d4409e1b2b74fe951b0e"} Mar 16 15:48:03 crc kubenswrapper[4736]: I0316 15:48:03.074751 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" podStartSLOduration=1.3246408920000001 podStartE2EDuration="2.074723808s" podCreationTimestamp="2026-03-16 15:48:01 +0000 UTC" firstStartedPulling="2026-03-16 15:48:01.970210812 +0000 UTC m=+2083.697601099" lastFinishedPulling="2026-03-16 15:48:02.720293728 +0000 UTC m=+2084.447684015" observedRunningTime="2026-03-16 15:48:03.064017445 +0000 UTC m=+2084.791407722" watchObservedRunningTime="2026-03-16 15:48:03.074723808 +0000 UTC m=+2084.802114115" Mar 16 15:48:04 crc kubenswrapper[4736]: I0316 15:48:04.399404 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:04 crc kubenswrapper[4736]: I0316 15:48:04.531669 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfp5\" (UniqueName: \"kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5\") pod \"80cd2bef-f6eb-4e45-8c17-b74fab484d78\" (UID: \"80cd2bef-f6eb-4e45-8c17-b74fab484d78\") " Mar 16 15:48:04 crc kubenswrapper[4736]: I0316 15:48:04.538083 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5" (OuterVolumeSpecName: "kube-api-access-5rfp5") pod "80cd2bef-f6eb-4e45-8c17-b74fab484d78" (UID: "80cd2bef-f6eb-4e45-8c17-b74fab484d78"). InnerVolumeSpecName "kube-api-access-5rfp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:48:04 crc kubenswrapper[4736]: I0316 15:48:04.634685 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfp5\" (UniqueName: \"kubernetes.io/projected/80cd2bef-f6eb-4e45-8c17-b74fab484d78-kube-api-access-5rfp5\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:05 crc kubenswrapper[4736]: I0316 15:48:05.066325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" event={"ID":"80cd2bef-f6eb-4e45-8c17-b74fab484d78","Type":"ContainerDied","Data":"276bacd2913d3c7465b9c48f11cdcf5c944c403a8887ebd83f74e9375c054115"} Mar 16 15:48:05 crc kubenswrapper[4736]: I0316 15:48:05.066365 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276bacd2913d3c7465b9c48f11cdcf5c944c403a8887ebd83f74e9375c054115" Mar 16 15:48:05 crc kubenswrapper[4736]: I0316 15:48:05.066401 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561268-pr7tq" Mar 16 15:48:05 crc kubenswrapper[4736]: I0316 15:48:05.484336 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561262-fpvfp"] Mar 16 15:48:05 crc kubenswrapper[4736]: I0316 15:48:05.497971 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561262-fpvfp"] Mar 16 15:48:06 crc kubenswrapper[4736]: I0316 15:48:06.992913 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4b5032-89e0-4dee-8a6c-74f43a3762e0" path="/var/lib/kubelet/pods/ed4b5032-89e0-4dee-8a6c-74f43a3762e0/volumes" Mar 16 15:48:08 crc kubenswrapper[4736]: I0316 15:48:08.507563 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:48:08 crc kubenswrapper[4736]: I0316 15:48:08.507609 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:48:11 crc kubenswrapper[4736]: I0316 15:48:11.121359 4736 generic.go:334] "Generic (PLEG): container finished" podID="6f274275-8257-4335-b3d8-a2441d5ddf1e" containerID="fb95f77ba5811e0b42ea70b22ffe655cfb7318e9c6381cfa2545feb96031e6f9" exitCode=0 Mar 16 15:48:11 crc kubenswrapper[4736]: I0316 15:48:11.121432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" event={"ID":"6f274275-8257-4335-b3d8-a2441d5ddf1e","Type":"ContainerDied","Data":"fb95f77ba5811e0b42ea70b22ffe655cfb7318e9c6381cfa2545feb96031e6f9"} Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.538415 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.681484 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5zs4\" (UniqueName: \"kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4\") pod \"6f274275-8257-4335-b3d8-a2441d5ddf1e\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.681611 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory\") pod \"6f274275-8257-4335-b3d8-a2441d5ddf1e\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.681634 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam\") pod \"6f274275-8257-4335-b3d8-a2441d5ddf1e\" (UID: \"6f274275-8257-4335-b3d8-a2441d5ddf1e\") " Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.686948 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4" (OuterVolumeSpecName: "kube-api-access-p5zs4") pod "6f274275-8257-4335-b3d8-a2441d5ddf1e" (UID: "6f274275-8257-4335-b3d8-a2441d5ddf1e"). InnerVolumeSpecName "kube-api-access-p5zs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.711348 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory" (OuterVolumeSpecName: "inventory") pod "6f274275-8257-4335-b3d8-a2441d5ddf1e" (UID: "6f274275-8257-4335-b3d8-a2441d5ddf1e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.712089 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6f274275-8257-4335-b3d8-a2441d5ddf1e" (UID: "6f274275-8257-4335-b3d8-a2441d5ddf1e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.784442 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5zs4\" (UniqueName: \"kubernetes.io/projected/6f274275-8257-4335-b3d8-a2441d5ddf1e-kube-api-access-p5zs4\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.784491 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:12 crc kubenswrapper[4736]: I0316 15:48:12.784516 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6f274275-8257-4335-b3d8-a2441d5ddf1e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.139765 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" event={"ID":"6f274275-8257-4335-b3d8-a2441d5ddf1e","Type":"ContainerDied","Data":"492a261cbb80272fdd1d593715677ede76469f63ee6f636912960b1b22f2f44f"} Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.139829 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492a261cbb80272fdd1d593715677ede76469f63ee6f636912960b1b22f2f44f" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.139856 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.238953 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq"] Mar 16 15:48:13 crc kubenswrapper[4736]: E0316 15:48:13.239442 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f274275-8257-4335-b3d8-a2441d5ddf1e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.239466 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f274275-8257-4335-b3d8-a2441d5ddf1e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:13 crc kubenswrapper[4736]: E0316 15:48:13.239513 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cd2bef-f6eb-4e45-8c17-b74fab484d78" containerName="oc" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.239522 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cd2bef-f6eb-4e45-8c17-b74fab484d78" containerName="oc" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.239825 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cd2bef-f6eb-4e45-8c17-b74fab484d78" containerName="oc" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.239866 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f274275-8257-4335-b3d8-a2441d5ddf1e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.240707 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.244345 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.244375 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.244552 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.244701 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.248714 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq"] Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.397353 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.397731 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4dk4\" (UniqueName: \"kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.397766 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.499488 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.499592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4dk4\" (UniqueName: \"kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.499622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.507142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.508659 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.517795 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4dk4\" (UniqueName: \"kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-f5xbq\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.617793 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:13 crc kubenswrapper[4736]: I0316 15:48:13.948816 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq"] Mar 16 15:48:14 crc kubenswrapper[4736]: I0316 15:48:14.153986 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" event={"ID":"3477848e-08a4-4e82-a565-d5e83bf58c7d","Type":"ContainerStarted","Data":"9d3c3bda23a8bfd5f664ab06e146d0a466477640e352dfeaee802ddf7e157dd7"} Mar 16 15:48:15 crc kubenswrapper[4736]: I0316 15:48:15.165906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" event={"ID":"3477848e-08a4-4e82-a565-d5e83bf58c7d","Type":"ContainerStarted","Data":"43107415994bf549dd94fb858ea87173cdcfd78d2016b56571e9334908a6a3c3"} Mar 16 15:48:15 crc kubenswrapper[4736]: I0316 15:48:15.190488 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" podStartSLOduration=1.6664536129999998 podStartE2EDuration="2.190471579s" podCreationTimestamp="2026-03-16 15:48:13 +0000 UTC" firstStartedPulling="2026-03-16 15:48:13.954567051 +0000 UTC m=+2095.681957338" lastFinishedPulling="2026-03-16 15:48:14.478585007 +0000 UTC m=+2096.205975304" observedRunningTime="2026-03-16 15:48:15.184893977 +0000 UTC m=+2096.912284264" watchObservedRunningTime="2026-03-16 15:48:15.190471579 +0000 UTC m=+2096.917861866" Mar 16 15:48:38 crc kubenswrapper[4736]: I0316 15:48:38.508493 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 16 15:48:38 crc kubenswrapper[4736]: I0316 15:48:38.509437 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:48:38 crc kubenswrapper[4736]: I0316 15:48:38.509523 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:48:38 crc kubenswrapper[4736]: I0316 15:48:38.510572 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:48:38 crc kubenswrapper[4736]: I0316 15:48:38.510728 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90" gracePeriod=600 Mar 16 15:48:39 crc kubenswrapper[4736]: I0316 15:48:39.410307 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90" exitCode=0 Mar 16 15:48:39 crc kubenswrapper[4736]: I0316 15:48:39.410394 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90"} Mar 16 15:48:39 crc kubenswrapper[4736]: I0316 15:48:39.411232 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985"} Mar 16 15:48:39 crc kubenswrapper[4736]: I0316 15:48:39.411344 4736 scope.go:117] "RemoveContainer" containerID="2c25b687ba341df6ceeec5350cee7b196d9614a2abadc497a953dd77aa61ecd7" Mar 16 15:48:49 crc kubenswrapper[4736]: I0316 15:48:49.737519 4736 scope.go:117] "RemoveContainer" containerID="37a86d47217afa88b89b58f9ff3074144559c5b471f87326beb2d2f53f2e7a60" Mar 16 15:48:52 crc kubenswrapper[4736]: I0316 15:48:52.534283 4736 generic.go:334] "Generic (PLEG): container finished" podID="3477848e-08a4-4e82-a565-d5e83bf58c7d" containerID="43107415994bf549dd94fb858ea87173cdcfd78d2016b56571e9334908a6a3c3" exitCode=0 Mar 16 15:48:52 crc kubenswrapper[4736]: I0316 15:48:52.534378 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" event={"ID":"3477848e-08a4-4e82-a565-d5e83bf58c7d","Type":"ContainerDied","Data":"43107415994bf549dd94fb858ea87173cdcfd78d2016b56571e9334908a6a3c3"} Mar 16 15:48:53 crc kubenswrapper[4736]: I0316 15:48:53.963660 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.109528 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam\") pod \"3477848e-08a4-4e82-a565-d5e83bf58c7d\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.109599 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4dk4\" (UniqueName: \"kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4\") pod \"3477848e-08a4-4e82-a565-d5e83bf58c7d\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.109671 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory\") pod \"3477848e-08a4-4e82-a565-d5e83bf58c7d\" (UID: \"3477848e-08a4-4e82-a565-d5e83bf58c7d\") " Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.115160 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4" (OuterVolumeSpecName: "kube-api-access-l4dk4") pod "3477848e-08a4-4e82-a565-d5e83bf58c7d" (UID: "3477848e-08a4-4e82-a565-d5e83bf58c7d"). InnerVolumeSpecName "kube-api-access-l4dk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.141563 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory" (OuterVolumeSpecName: "inventory") pod "3477848e-08a4-4e82-a565-d5e83bf58c7d" (UID: "3477848e-08a4-4e82-a565-d5e83bf58c7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.154398 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3477848e-08a4-4e82-a565-d5e83bf58c7d" (UID: "3477848e-08a4-4e82-a565-d5e83bf58c7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.212099 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.212325 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4dk4\" (UniqueName: \"kubernetes.io/projected/3477848e-08a4-4e82-a565-d5e83bf58c7d-kube-api-access-l4dk4\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.212385 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3477848e-08a4-4e82-a565-d5e83bf58c7d-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.554663 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" event={"ID":"3477848e-08a4-4e82-a565-d5e83bf58c7d","Type":"ContainerDied","Data":"9d3c3bda23a8bfd5f664ab06e146d0a466477640e352dfeaee802ddf7e157dd7"} Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.554952 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3c3bda23a8bfd5f664ab06e146d0a466477640e352dfeaee802ddf7e157dd7" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.555014 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-f5xbq" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.677422 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth"] Mar 16 15:48:54 crc kubenswrapper[4736]: E0316 15:48:54.677977 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3477848e-08a4-4e82-a565-d5e83bf58c7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.678005 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3477848e-08a4-4e82-a565-d5e83bf58c7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.681956 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3477848e-08a4-4e82-a565-d5e83bf58c7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.682861 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.685566 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.685914 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.686202 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.689551 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.696909 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth"] Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.823803 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.823869 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztz5\" (UniqueName: \"kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.823911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.925542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.925598 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ztz5\" (UniqueName: \"kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.925637 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.930835 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.931038 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:54 crc kubenswrapper[4736]: I0316 15:48:54.958707 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ztz5\" (UniqueName: \"kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-thsth\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:55 crc kubenswrapper[4736]: I0316 15:48:55.006669 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:48:55 crc kubenswrapper[4736]: I0316 15:48:55.533758 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth"] Mar 16 15:48:55 crc kubenswrapper[4736]: I0316 15:48:55.562896 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" event={"ID":"09227c49-1a61-4c9c-827d-336efc0fe550","Type":"ContainerStarted","Data":"fb063c5ace77dee60d25b1183c0fa000d27715e38016a19a43bb33526770e86e"} Mar 16 15:48:56 crc kubenswrapper[4736]: I0316 15:48:56.578793 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" event={"ID":"09227c49-1a61-4c9c-827d-336efc0fe550","Type":"ContainerStarted","Data":"dcce7aa1fb0f5bc9e17cfebb189ad1f06a5fde21697651fc152e438725aba1d7"} Mar 16 15:48:56 crc kubenswrapper[4736]: I0316 15:48:56.620708 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" podStartSLOduration=2.220811105 podStartE2EDuration="2.620678116s" podCreationTimestamp="2026-03-16 15:48:54 +0000 UTC" firstStartedPulling="2026-03-16 15:48:55.543787995 +0000 UTC m=+2137.271178272" lastFinishedPulling="2026-03-16 15:48:55.943654996 +0000 UTC m=+2137.671045283" observedRunningTime="2026-03-16 15:48:56.601948843 +0000 UTC m=+2138.329339130" watchObservedRunningTime="2026-03-16 15:48:56.620678116 +0000 UTC m=+2138.348068403" Mar 16 15:49:46 crc kubenswrapper[4736]: I0316 15:49:46.635905 4736 generic.go:334] "Generic (PLEG): container finished" podID="09227c49-1a61-4c9c-827d-336efc0fe550" 
containerID="dcce7aa1fb0f5bc9e17cfebb189ad1f06a5fde21697651fc152e438725aba1d7" exitCode=0 Mar 16 15:49:46 crc kubenswrapper[4736]: I0316 15:49:46.636013 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" event={"ID":"09227c49-1a61-4c9c-827d-336efc0fe550","Type":"ContainerDied","Data":"dcce7aa1fb0f5bc9e17cfebb189ad1f06a5fde21697651fc152e438725aba1d7"} Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.052323 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.224362 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory\") pod \"09227c49-1a61-4c9c-827d-336efc0fe550\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.225159 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam\") pod \"09227c49-1a61-4c9c-827d-336efc0fe550\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.225314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ztz5\" (UniqueName: \"kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5\") pod \"09227c49-1a61-4c9c-827d-336efc0fe550\" (UID: \"09227c49-1a61-4c9c-827d-336efc0fe550\") " Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.229569 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5" (OuterVolumeSpecName: "kube-api-access-5ztz5") pod "09227c49-1a61-4c9c-827d-336efc0fe550" (UID: "09227c49-1a61-4c9c-827d-336efc0fe550"). InnerVolumeSpecName "kube-api-access-5ztz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.255685 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory" (OuterVolumeSpecName: "inventory") pod "09227c49-1a61-4c9c-827d-336efc0fe550" (UID: "09227c49-1a61-4c9c-827d-336efc0fe550"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.256722 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "09227c49-1a61-4c9c-827d-336efc0fe550" (UID: "09227c49-1a61-4c9c-827d-336efc0fe550"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.328584 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.328657 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/09227c49-1a61-4c9c-827d-336efc0fe550-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.328686 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ztz5\" (UniqueName: \"kubernetes.io/projected/09227c49-1a61-4c9c-827d-336efc0fe550-kube-api-access-5ztz5\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.692714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" event={"ID":"09227c49-1a61-4c9c-827d-336efc0fe550","Type":"ContainerDied","Data":"fb063c5ace77dee60d25b1183c0fa000d27715e38016a19a43bb33526770e86e"} Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.692758 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb063c5ace77dee60d25b1183c0fa000d27715e38016a19a43bb33526770e86e" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.692843 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-thsth" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.750728 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-clhnp"] Mar 16 15:49:48 crc kubenswrapper[4736]: E0316 15:49:48.751206 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09227c49-1a61-4c9c-827d-336efc0fe550" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.751229 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="09227c49-1a61-4c9c-827d-336efc0fe550" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.751488 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="09227c49-1a61-4c9c-827d-336efc0fe550" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.752313 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.755081 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.756458 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.756575 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.756810 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.759217 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-clhnp"] Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.944599 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sj59\" (UniqueName: \"kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.944877 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:48 crc kubenswrapper[4736]: I0316 15:49:48.945095 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.047480 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sj59\" (UniqueName: \"kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.047667 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.047749 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc 
kubenswrapper[4736]: I0316 15:49:49.052410 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.058893 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.070564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sj59\" (UniqueName: \"kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59\") pod \"ssh-known-hosts-edpm-deployment-clhnp\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.081802 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.665017 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-clhnp"] Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.665410 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:49:49 crc kubenswrapper[4736]: I0316 15:49:49.702590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" event={"ID":"b21d0120-de7a-44aa-a9a7-469ff2670bd4","Type":"ContainerStarted","Data":"f31de15fb1b8edad82e43cc6ed94dfcc6fce93b1269ba682e3176008a151ab05"} Mar 16 15:49:50 crc kubenswrapper[4736]: I0316 15:49:50.713018 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" event={"ID":"b21d0120-de7a-44aa-a9a7-469ff2670bd4","Type":"ContainerStarted","Data":"7bcba73d2b1d36adfa5a9df74431d4991c3533d207c94f690a2ee41d362e0ddc"} Mar 16 15:49:50 crc kubenswrapper[4736]: I0316 15:49:50.739128 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" podStartSLOduration=2.291557119 podStartE2EDuration="2.739089904s" podCreationTimestamp="2026-03-16 15:49:48 +0000 UTC" firstStartedPulling="2026-03-16 15:49:49.665166404 +0000 UTC m=+2191.392556711" lastFinishedPulling="2026-03-16 15:49:50.112699209 +0000 UTC m=+2191.840089496" observedRunningTime="2026-03-16 15:49:50.729450981 +0000 UTC m=+2192.456841268" watchObservedRunningTime="2026-03-16 15:49:50.739089904 +0000 UTC m=+2192.466480181" Mar 16 15:49:56 crc kubenswrapper[4736]: I0316 15:49:56.784245 4736 generic.go:334] "Generic (PLEG): container finished" podID="b21d0120-de7a-44aa-a9a7-469ff2670bd4" containerID="7bcba73d2b1d36adfa5a9df74431d4991c3533d207c94f690a2ee41d362e0ddc" exitCode=0 Mar 16 15:49:56 crc kubenswrapper[4736]: I0316 15:49:56.784344 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" 
event={"ID":"b21d0120-de7a-44aa-a9a7-469ff2670bd4","Type":"ContainerDied","Data":"7bcba73d2b1d36adfa5a9df74431d4991c3533d207c94f690a2ee41d362e0ddc"} Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.248064 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.436937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0\") pod \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.437047 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sj59\" (UniqueName: \"kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59\") pod \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.437186 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam\") pod \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\" (UID: \"b21d0120-de7a-44aa-a9a7-469ff2670bd4\") " Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.452400 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59" (OuterVolumeSpecName: "kube-api-access-5sj59") pod "b21d0120-de7a-44aa-a9a7-469ff2670bd4" (UID: "b21d0120-de7a-44aa-a9a7-469ff2670bd4"). InnerVolumeSpecName "kube-api-access-5sj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.466291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b21d0120-de7a-44aa-a9a7-469ff2670bd4" (UID: "b21d0120-de7a-44aa-a9a7-469ff2670bd4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.472036 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b21d0120-de7a-44aa-a9a7-469ff2670bd4" (UID: "b21d0120-de7a-44aa-a9a7-469ff2670bd4"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.539011 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.539050 4736 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b21d0120-de7a-44aa-a9a7-469ff2670bd4-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.539060 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sj59\" (UniqueName: \"kubernetes.io/projected/b21d0120-de7a-44aa-a9a7-469ff2670bd4-kube-api-access-5sj59\") on node \"crc\" DevicePath \"\"" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.805371 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" event={"ID":"b21d0120-de7a-44aa-a9a7-469ff2670bd4","Type":"ContainerDied","Data":"f31de15fb1b8edad82e43cc6ed94dfcc6fce93b1269ba682e3176008a151ab05"} Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.805424 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31de15fb1b8edad82e43cc6ed94dfcc6fce93b1269ba682e3176008a151ab05" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.805389 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-clhnp" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.935410 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck"] Mar 16 15:49:58 crc kubenswrapper[4736]: E0316 15:49:58.935798 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21d0120-de7a-44aa-a9a7-469ff2670bd4" containerName="ssh-known-hosts-edpm-deployment" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.935813 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21d0120-de7a-44aa-a9a7-469ff2670bd4" containerName="ssh-known-hosts-edpm-deployment" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.936027 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21d0120-de7a-44aa-a9a7-469ff2670bd4" containerName="ssh-known-hosts-edpm-deployment" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.936738 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.938442 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.939388 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.939588 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.939707 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:49:58 crc kubenswrapper[4736]: I0316 15:49:58.954220 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck"] Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.047629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.047677 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6272f\" (UniqueName: \"kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.047977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.148676 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.148949 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6272f\" (UniqueName: \"kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.149181 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.162490 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.166138 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.166418 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6272f\" (UniqueName: \"kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-trqck\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.259934 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.774136 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck"] Mar 16 15:49:59 crc kubenswrapper[4736]: W0316 15:49:59.784588 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ac6b0e7_17e2_4e8d_8e5c_5f188af3ed0a.slice/crio-ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c WatchSource:0}: Error finding container ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c: Status 404 returned error can't find the container with id ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c Mar 16 15:49:59 crc kubenswrapper[4736]: I0316 15:49:59.816848 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" event={"ID":"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a","Type":"ContainerStarted","Data":"ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c"} Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.131534 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561270-8hwgt"] Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.132969 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.134880 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.134931 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.135318 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.212205 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561270-8hwgt"] Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.267785 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnnnh\" (UniqueName: \"kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh\") pod \"auto-csr-approver-29561270-8hwgt\" (UID: \"8409a279-a5c8-4f8f-b208-e741e0ecb7d9\") " pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.369144 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnnnh\" (UniqueName: \"kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh\") pod \"auto-csr-approver-29561270-8hwgt\" (UID: \"8409a279-a5c8-4f8f-b208-e741e0ecb7d9\") " pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.387901 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnnnh\" (UniqueName: \"kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh\") pod \"auto-csr-approver-29561270-8hwgt\" (UID: \"8409a279-a5c8-4f8f-b208-e741e0ecb7d9\") " pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.455324 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.827893 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" event={"ID":"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a","Type":"ContainerStarted","Data":"4312e66fe0dcab64f84eb7bfd0e22f3244843ce3622f0f5d62d22ac29a119e4a"} Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.850155 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" podStartSLOduration=2.185305554 podStartE2EDuration="2.85013147s" podCreationTimestamp="2026-03-16 15:49:58 +0000 UTC" firstStartedPulling="2026-03-16 15:49:59.788915507 +0000 UTC m=+2201.516305794" lastFinishedPulling="2026-03-16 15:50:00.453741423 +0000 UTC m=+2202.181131710" observedRunningTime="2026-03-16 15:50:00.843685303 +0000 UTC m=+2202.571075590" watchObservedRunningTime="2026-03-16 15:50:00.85013147 +0000 UTC m=+2202.577521757" Mar 16 15:50:00 crc kubenswrapper[4736]: W0316 15:50:00.903718 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8409a279_a5c8_4f8f_b208_e741e0ecb7d9.slice/crio-3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146 WatchSource:0}: Error finding container 3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146: Status 404 returned error can't find the container with id 3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146 Mar 16 15:50:00 crc kubenswrapper[4736]: I0316 15:50:00.916865 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561270-8hwgt"] Mar 16 15:50:01 crc kubenswrapper[4736]: I0316 15:50:01.848816 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" event={"ID":"8409a279-a5c8-4f8f-b208-e741e0ecb7d9","Type":"ContainerStarted","Data":"3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146"} Mar 16 15:50:02 crc kubenswrapper[4736]: I0316 15:50:02.859058 4736 generic.go:334] "Generic (PLEG): container finished" podID="8409a279-a5c8-4f8f-b208-e741e0ecb7d9" containerID="5c44dcf3243512af820d90e18a8790a66e606735ba314734e1ecfd97c8647636" exitCode=0 Mar 16 15:50:02 crc kubenswrapper[4736]: I0316 15:50:02.859152 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" event={"ID":"8409a279-a5c8-4f8f-b208-e741e0ecb7d9","Type":"ContainerDied","Data":"5c44dcf3243512af820d90e18a8790a66e606735ba314734e1ecfd97c8647636"} Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.348470 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.449778 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnnnh\" (UniqueName: \"kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh\") pod \"8409a279-a5c8-4f8f-b208-e741e0ecb7d9\" (UID: \"8409a279-a5c8-4f8f-b208-e741e0ecb7d9\") " Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.456843 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh" (OuterVolumeSpecName: "kube-api-access-lnnnh") pod "8409a279-a5c8-4f8f-b208-e741e0ecb7d9" (UID: "8409a279-a5c8-4f8f-b208-e741e0ecb7d9"). InnerVolumeSpecName "kube-api-access-lnnnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.551802 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnnnh\" (UniqueName: \"kubernetes.io/projected/8409a279-a5c8-4f8f-b208-e741e0ecb7d9-kube-api-access-lnnnh\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.877281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" event={"ID":"8409a279-a5c8-4f8f-b208-e741e0ecb7d9","Type":"ContainerDied","Data":"3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146"} Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.877330 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b64857294b726dc79b5b73ced68c0e629910ddbebf942115302283ead9b6146" Mar 16 15:50:04 crc kubenswrapper[4736]: I0316 15:50:04.877385 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561270-8hwgt" Mar 16 15:50:05 crc kubenswrapper[4736]: I0316 15:50:05.442924 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561264-wzn7k"] Mar 16 15:50:05 crc kubenswrapper[4736]: I0316 15:50:05.449796 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561264-wzn7k"] Mar 16 15:50:06 crc kubenswrapper[4736]: I0316 15:50:06.988379 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5584bb7-64ce-4f53-b4e1-ce6432471a05" path="/var/lib/kubelet/pods/d5584bb7-64ce-4f53-b4e1-ce6432471a05/volumes" Mar 16 15:50:08 crc kubenswrapper[4736]: I0316 15:50:08.915854 4736 generic.go:334] "Generic (PLEG): container finished" podID="9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" containerID="4312e66fe0dcab64f84eb7bfd0e22f3244843ce3622f0f5d62d22ac29a119e4a" exitCode=0 Mar 16 15:50:08 crc kubenswrapper[4736]: I0316 15:50:08.915967 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" event={"ID":"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a","Type":"ContainerDied","Data":"4312e66fe0dcab64f84eb7bfd0e22f3244843ce3622f0f5d62d22ac29a119e4a"} Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.354666 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.502971 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam\") pod \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.503194 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory\") pod \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.503265 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6272f\" (UniqueName: \"kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f\") pod \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\" (UID: \"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a\") " Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.508281 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f" (OuterVolumeSpecName: "kube-api-access-6272f") pod "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" (UID: "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a"). InnerVolumeSpecName "kube-api-access-6272f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.534948 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" (UID: "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.535510 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory" (OuterVolumeSpecName: "inventory") pod "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" (UID: "9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.605708 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.605747 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.605760 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6272f\" (UniqueName: \"kubernetes.io/projected/9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a-kube-api-access-6272f\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.932732 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" event={"ID":"9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a","Type":"ContainerDied","Data":"ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c"} Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.933043 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc5329718fe43a5848b49682245e36b2c0a102510a9e2c3f866930e267c015c" Mar 16 15:50:10 crc kubenswrapper[4736]: I0316 15:50:10.932983 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-trqck" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.016150 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb"] Mar 16 15:50:11 crc kubenswrapper[4736]: E0316 15:50:11.016521 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8409a279-a5c8-4f8f-b208-e741e0ecb7d9" containerName="oc" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.016539 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8409a279-a5c8-4f8f-b208-e741e0ecb7d9" containerName="oc" Mar 16 15:50:11 crc kubenswrapper[4736]: E0316 15:50:11.016563 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.016571 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.016734 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.016744 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8409a279-a5c8-4f8f-b208-e741e0ecb7d9" containerName="oc" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.017327 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.019399 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.019621 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.019849 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.022775 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.046070 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb"] Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.115322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.115380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.115449 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vssmv\" (UniqueName: \"kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.217217 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.217294 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.217383 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vssmv\" (UniqueName: \"kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.223018 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.225122 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.233420 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vssmv\" (UniqueName: \"kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.333648 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.871610 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb"] Mar 16 15:50:11 crc kubenswrapper[4736]: I0316 15:50:11.948189 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" event={"ID":"431e1d6e-e9a2-4414-b37e-9612991eb00c","Type":"ContainerStarted","Data":"f47b75e5fa42e246aed268728891d454e2654a1f9b0d0fa1b42cd788f8b76f92"} Mar 16 15:50:12 crc kubenswrapper[4736]: I0316 15:50:12.961640 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" event={"ID":"431e1d6e-e9a2-4414-b37e-9612991eb00c","Type":"ContainerStarted","Data":"05473c9867631ec3fd40d8cb052493364faf88f517d720a2829c53a0e7b32337"} Mar 16 15:50:12 crc kubenswrapper[4736]: I0316 15:50:12.977263 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" podStartSLOduration=2.463269979 podStartE2EDuration="2.977242361s" podCreationTimestamp="2026-03-16 15:50:10 +0000 UTC" firstStartedPulling="2026-03-16 15:50:11.883060807 +0000 UTC m=+2213.610451094" lastFinishedPulling="2026-03-16 15:50:12.397033179 +0000 UTC m=+2214.124423476" observedRunningTime="2026-03-16 15:50:12.975206735 +0000 UTC m=+2214.702597032" watchObservedRunningTime="2026-03-16 15:50:12.977242361 +0000 UTC m=+2214.704632648" Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.661454 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.666641 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.676034 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.797176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.797346 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5kg\" (UniqueName: \"kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:14 crc kubenswrapper[4736]: I0316 15:50:14.797406 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.638384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.638802 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.638905 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5kg\" (UniqueName: \"kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.639193 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.639451 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.682455 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lc5kg\" (UniqueName: \"kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg\") pod \"community-operators-jjmnv\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:15 crc kubenswrapper[4736]: I0316 15:50:15.898514 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:16 crc kubenswrapper[4736]: I0316 15:50:16.440436 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:16 crc kubenswrapper[4736]: I0316 15:50:16.779322 4736 generic.go:334] "Generic (PLEG): container finished" podID="4028e23f-104c-4f85-9b12-159728a7db60" containerID="8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae" exitCode=0 Mar 16 15:50:16 crc kubenswrapper[4736]: I0316 15:50:16.779368 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerDied","Data":"8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae"} Mar 16 15:50:16 crc kubenswrapper[4736]: I0316 15:50:16.779615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerStarted","Data":"0bfdc75f4302dd28e60d31c20d080aa6eb603688b8a7a47d42388b2a7b4fe4e8"} Mar 16 15:50:18 crc kubenswrapper[4736]: I0316 15:50:18.803712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerStarted","Data":"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673"} Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.041501 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.044068 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.054227 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.234526 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.239380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.239746 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cjd\" (UniqueName: \"kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.341522 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.341614 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.341743 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85cjd\" (UniqueName: \"kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.344235 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.344633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.374887 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-85cjd\" (UniqueName: \"kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd\") pod \"redhat-marketplace-hztzd\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.665077 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.825797 4736 generic.go:334] "Generic (PLEG): container finished" podID="4028e23f-104c-4f85-9b12-159728a7db60" containerID="8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673" exitCode=0 Mar 16 15:50:20 crc kubenswrapper[4736]: I0316 15:50:20.825868 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerDied","Data":"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673"} Mar 16 15:50:21 crc kubenswrapper[4736]: I0316 15:50:21.837355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerStarted","Data":"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3"} Mar 16 15:50:21 crc kubenswrapper[4736]: I0316 15:50:21.870225 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjmnv" podStartSLOduration=3.214080931 podStartE2EDuration="7.870202264s" podCreationTimestamp="2026-03-16 15:50:14 +0000 UTC" firstStartedPulling="2026-03-16 15:50:16.780892149 +0000 UTC m=+2218.508282436" lastFinishedPulling="2026-03-16 15:50:21.437013472 +0000 UTC m=+2223.164403769" observedRunningTime="2026-03-16 15:50:21.860683005 +0000 UTC m=+2223.588073292" watchObservedRunningTime="2026-03-16 15:50:21.870202264 +0000 UTC m=+2223.597592551" Mar 16 15:50:22 crc kubenswrapper[4736]: I0316 15:50:22.049503 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:22 crc kubenswrapper[4736]: W0316 15:50:22.053856 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod485547bc_37b0_4488_bf02_bcef66249018.slice/crio-6a439691d537f86bb8dad3836b31d78b9a319dae4c17383bc97e272294b725d3 WatchSource:0}: Error finding container 6a439691d537f86bb8dad3836b31d78b9a319dae4c17383bc97e272294b725d3: Status 404 returned error can't find the container with id 6a439691d537f86bb8dad3836b31d78b9a319dae4c17383bc97e272294b725d3 Mar 16 15:50:22 crc kubenswrapper[4736]: I0316 15:50:22.845701 4736 generic.go:334] "Generic (PLEG): container finished" podID="485547bc-37b0-4488-bf02-bcef66249018" containerID="3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908" exitCode=0 Mar 16 15:50:22 crc kubenswrapper[4736]: I0316 15:50:22.845787 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerDied","Data":"3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908"} Mar 16 15:50:22 crc kubenswrapper[4736]: I0316 15:50:22.846060 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" 
event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerStarted","Data":"6a439691d537f86bb8dad3836b31d78b9a319dae4c17383bc97e272294b725d3"} Mar 16 15:50:23 crc kubenswrapper[4736]: I0316 15:50:23.858733 4736 generic.go:334] "Generic (PLEG): container finished" podID="431e1d6e-e9a2-4414-b37e-9612991eb00c" containerID="05473c9867631ec3fd40d8cb052493364faf88f517d720a2829c53a0e7b32337" exitCode=0 Mar 16 15:50:23 crc kubenswrapper[4736]: I0316 15:50:23.858830 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" event={"ID":"431e1d6e-e9a2-4414-b37e-9612991eb00c","Type":"ContainerDied","Data":"05473c9867631ec3fd40d8cb052493364faf88f517d720a2829c53a0e7b32337"} Mar 16 15:50:23 crc kubenswrapper[4736]: I0316 15:50:23.861850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerStarted","Data":"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e"} Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.368957 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.551768 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam\") pod \"431e1d6e-e9a2-4414-b37e-9612991eb00c\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.551878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory\") pod \"431e1d6e-e9a2-4414-b37e-9612991eb00c\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.552027 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vssmv\" (UniqueName: \"kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv\") pod \"431e1d6e-e9a2-4414-b37e-9612991eb00c\" (UID: \"431e1d6e-e9a2-4414-b37e-9612991eb00c\") " Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.559430 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv" (OuterVolumeSpecName: "kube-api-access-vssmv") pod "431e1d6e-e9a2-4414-b37e-9612991eb00c" (UID: "431e1d6e-e9a2-4414-b37e-9612991eb00c"). InnerVolumeSpecName "kube-api-access-vssmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.581533 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "431e1d6e-e9a2-4414-b37e-9612991eb00c" (UID: "431e1d6e-e9a2-4414-b37e-9612991eb00c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.654530 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.654574 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vssmv\" (UniqueName: \"kubernetes.io/projected/431e1d6e-e9a2-4414-b37e-9612991eb00c-kube-api-access-vssmv\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.740155 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory" (OuterVolumeSpecName: "inventory") pod "431e1d6e-e9a2-4414-b37e-9612991eb00c" (UID: "431e1d6e-e9a2-4414-b37e-9612991eb00c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.759295 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/431e1d6e-e9a2-4414-b37e-9612991eb00c-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.878554 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" event={"ID":"431e1d6e-e9a2-4414-b37e-9612991eb00c","Type":"ContainerDied","Data":"f47b75e5fa42e246aed268728891d454e2654a1f9b0d0fa1b42cd788f8b76f92"} Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.878601 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f47b75e5fa42e246aed268728891d454e2654a1f9b0d0fa1b42cd788f8b76f92" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.878655 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.898836 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.899017 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:25 crc kubenswrapper[4736]: I0316 15:50:25.981066 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.012234 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9"] Mar 16 15:50:26 crc kubenswrapper[4736]: E0316 15:50:26.012749 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="431e1d6e-e9a2-4414-b37e-9612991eb00c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.012772 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="431e1d6e-e9a2-4414-b37e-9612991eb00c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.012999 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="431e1d6e-e9a2-4414-b37e-9612991eb00c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.013824 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019291 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019479 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019540 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019496 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019436 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019349 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019811 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.019945 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.040592 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9"] Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170485 4736 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170538 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170584 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170651 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170675 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170710 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170755 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170814 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170860 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bwv\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170913 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.170977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.171041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.171095 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.171149 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273338 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273364 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273408 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273428 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273452 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273483 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273521 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc 
kubenswrapper[4736]: I0316 15:50:26.273546 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9bwv\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273576 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273608 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273643 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273676 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.273701 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.313543 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.313923 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.314227 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.315976 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.316481 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.316982 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.318929 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.319153 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.320353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: 
I0316 15:50:26.320807 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.320837 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.320876 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.320973 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9bwv\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.324352 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.334365 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.892333 4736 generic.go:334] "Generic (PLEG): container finished" podID="485547bc-37b0-4488-bf02-bcef66249018" containerID="adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e" exitCode=0 Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.892421 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerDied","Data":"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e"} Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.917472 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9"] Mar 16 15:50:26 crc kubenswrapper[4736]: I0316 15:50:26.953159 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.439435 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.912201 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" event={"ID":"7f216da9-755f-42e5-8058-15af7388a669","Type":"ContainerStarted","Data":"28ace84c765a251975337cfcef7af8f0a069c2ac0a7827866955d9c715efce8e"} Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.913133 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" event={"ID":"7f216da9-755f-42e5-8058-15af7388a669","Type":"ContainerStarted","Data":"7ec242addc6b714dfda94931d4c71851396f5dbbfe67c85ffa0e2b1c164ec5c8"} Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.916458 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerStarted","Data":"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27"} Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.938061 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" podStartSLOduration=2.488608606 podStartE2EDuration="2.938043823s" podCreationTimestamp="2026-03-16 15:50:25 +0000 UTC" firstStartedPulling="2026-03-16 15:50:26.932164964 +0000 UTC m=+2228.659555251" lastFinishedPulling="2026-03-16 15:50:27.381600181 +0000 UTC m=+2229.108990468" observedRunningTime="2026-03-16 15:50:27.932520762 +0000 UTC m=+2229.659911059" watchObservedRunningTime="2026-03-16 15:50:27.938043823 +0000 UTC m=+2229.665434110" Mar 16 15:50:27 crc kubenswrapper[4736]: I0316 15:50:27.965038 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hztzd" podStartSLOduration=3.516804031 podStartE2EDuration="7.965022461s" podCreationTimestamp="2026-03-16 15:50:20 +0000 UTC" firstStartedPulling="2026-03-16 15:50:22.848194962 +0000 UTC m=+2224.575585249" lastFinishedPulling="2026-03-16 15:50:27.296413392 +0000 UTC m=+2229.023803679" observedRunningTime="2026-03-16 15:50:27.960898988 +0000 UTC m=+2229.688289285" watchObservedRunningTime="2026-03-16 15:50:27.965022461 +0000 UTC m=+2229.692412748" Mar 16 15:50:28 crc 
kubenswrapper[4736]: I0316 15:50:28.922934 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjmnv" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="registry-server" containerID="cri-o://f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3" gracePeriod=2 Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.498028 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.642356 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content\") pod \"4028e23f-104c-4f85-9b12-159728a7db60\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.642427 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc5kg\" (UniqueName: \"kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg\") pod \"4028e23f-104c-4f85-9b12-159728a7db60\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.642605 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities\") pod \"4028e23f-104c-4f85-9b12-159728a7db60\" (UID: \"4028e23f-104c-4f85-9b12-159728a7db60\") " Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.643801 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities" (OuterVolumeSpecName: "utilities") pod "4028e23f-104c-4f85-9b12-159728a7db60" (UID: "4028e23f-104c-4f85-9b12-159728a7db60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.644566 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.651823 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg" (OuterVolumeSpecName: "kube-api-access-lc5kg") pod "4028e23f-104c-4f85-9b12-159728a7db60" (UID: "4028e23f-104c-4f85-9b12-159728a7db60"). InnerVolumeSpecName "kube-api-access-lc5kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.703301 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4028e23f-104c-4f85-9b12-159728a7db60" (UID: "4028e23f-104c-4f85-9b12-159728a7db60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.751089 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4028e23f-104c-4f85-9b12-159728a7db60-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.752231 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc5kg\" (UniqueName: \"kubernetes.io/projected/4028e23f-104c-4f85-9b12-159728a7db60-kube-api-access-lc5kg\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.936328 4736 generic.go:334] "Generic (PLEG): container finished" podID="4028e23f-104c-4f85-9b12-159728a7db60" containerID="f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3" exitCode=0 Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.936373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerDied","Data":"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3"} Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.936404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjmnv" event={"ID":"4028e23f-104c-4f85-9b12-159728a7db60","Type":"ContainerDied","Data":"0bfdc75f4302dd28e60d31c20d080aa6eb603688b8a7a47d42388b2a7b4fe4e8"} Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.936427 4736 scope.go:117] "RemoveContainer" containerID="f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.936566 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jjmnv" Mar 16 15:50:29 crc kubenswrapper[4736]: I0316 15:50:29.973725 4736 scope.go:117] "RemoveContainer" containerID="8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.011059 4736 scope.go:117] "RemoveContainer" containerID="8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.011493 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.019859 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjmnv"] Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.054468 4736 scope.go:117] "RemoveContainer" containerID="f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3" Mar 16 15:50:30 crc kubenswrapper[4736]: E0316 15:50:30.055023 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3\": container with ID starting with f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3 not found: ID does not exist" containerID="f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.055123 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3"} err="failed to get container status \"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3\": rpc error: code = NotFound desc = could not find container \"f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3\": container with ID starting with f9884b47627d18ed1c77544449e658a759c2e1ef30e22c2930d2c2707c5f5ff3 not found: ID does not exist" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.055162 4736 scope.go:117] "RemoveContainer" containerID="8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673" Mar 16 15:50:30 crc kubenswrapper[4736]: E0316 15:50:30.055640 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673\": container with ID starting with 8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673 not found: ID does not exist" containerID="8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.055679 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673"} err="failed to get container status \"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673\": rpc error: code = NotFound desc = could not find container \"8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673\": container with ID starting with 8ea23ed4944619c835e0ba664883488fb9cb1f9d7ef09501f83ffbc680ea6673 not found: ID does not exist" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.055704 4736 scope.go:117] "RemoveContainer" containerID="8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae" Mar 16 15:50:30 crc kubenswrapper[4736]: E0316 15:50:30.055990 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae\": container with ID starting with 8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae not found: ID does not exist" containerID="8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.056023 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae"} err="failed to get container status \"8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae\": rpc error: code = NotFound desc = could not find container \"8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae\": container with ID starting with 8190bc55b3edf4aaa45cab11d6e94662f91a25d5b30df046b2925f4c75f716ae not found: ID does not exist" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.668600 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.668912 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.731869 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:30 crc kubenswrapper[4736]: I0316 15:50:30.990700 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4028e23f-104c-4f85-9b12-159728a7db60" path="/var/lib/kubelet/pods/4028e23f-104c-4f85-9b12-159728a7db60/volumes" Mar 16 15:50:38 crc kubenswrapper[4736]: I0316 15:50:38.507765 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:50:38 crc kubenswrapper[4736]: I0316 15:50:38.508405 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:50:40 crc kubenswrapper[4736]: I0316 15:50:40.726997 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:40 crc kubenswrapper[4736]: I0316 15:50:40.785031 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.047759 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hztzd" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="registry-server" containerID="cri-o://6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27" gracePeriod=2 Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.512577 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.594640 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85cjd\" (UniqueName: \"kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd\") pod \"485547bc-37b0-4488-bf02-bcef66249018\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.594917 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content\") pod \"485547bc-37b0-4488-bf02-bcef66249018\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.594989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities\") pod \"485547bc-37b0-4488-bf02-bcef66249018\" (UID: \"485547bc-37b0-4488-bf02-bcef66249018\") " Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.596199 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities" (OuterVolumeSpecName: "utilities") pod "485547bc-37b0-4488-bf02-bcef66249018" (UID: "485547bc-37b0-4488-bf02-bcef66249018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.600454 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd" (OuterVolumeSpecName: "kube-api-access-85cjd") pod "485547bc-37b0-4488-bf02-bcef66249018" (UID: "485547bc-37b0-4488-bf02-bcef66249018"). InnerVolumeSpecName "kube-api-access-85cjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.631935 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "485547bc-37b0-4488-bf02-bcef66249018" (UID: "485547bc-37b0-4488-bf02-bcef66249018"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.697408 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85cjd\" (UniqueName: \"kubernetes.io/projected/485547bc-37b0-4488-bf02-bcef66249018-kube-api-access-85cjd\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.697615 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:41 crc kubenswrapper[4736]: I0316 15:50:41.697697 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/485547bc-37b0-4488-bf02-bcef66249018-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.059066 4736 generic.go:334] "Generic (PLEG): container finished" podID="485547bc-37b0-4488-bf02-bcef66249018" containerID="6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27" exitCode=0 Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.059127 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerDied","Data":"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27"} Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.059162 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hztzd" event={"ID":"485547bc-37b0-4488-bf02-bcef66249018","Type":"ContainerDied","Data":"6a439691d537f86bb8dad3836b31d78b9a319dae4c17383bc97e272294b725d3"} Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.059184 4736 scope.go:117] "RemoveContainer" containerID="6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.059202 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hztzd" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.098201 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.101991 4736 scope.go:117] "RemoveContainer" containerID="adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.105953 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hztzd"] Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.132469 4736 scope.go:117] "RemoveContainer" containerID="3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.170485 4736 scope.go:117] "RemoveContainer" containerID="6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27" Mar 16 15:50:42 crc kubenswrapper[4736]: E0316 15:50:42.171152 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27\": container with ID starting with 6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27 not found: ID does not exist" containerID="6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.171233 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27"} err="failed to get container status \"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27\": rpc error: code = NotFound desc = could not find container \"6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27\": container with ID starting with 6ecb48be62bbb6c3ba0ae5fc9b1080810e4f47d49123ddea6d82e747a4ad4f27 not found: ID does not exist" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.171272 4736 scope.go:117] "RemoveContainer" containerID="adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e" Mar 16 15:50:42 crc kubenswrapper[4736]: E0316 15:50:42.171735 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e\": container with ID starting with adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e not found: ID does not exist" containerID="adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.171768 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e"} err="failed to get container status \"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e\": rpc error: code = NotFound desc = could not find container \"adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e\": container with ID starting with adfec97102ff12a1ad17cc9a6e7421ca06ce85943d2aa975615b4424708c520e not found: ID does not exist" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.171785 4736 scope.go:117] "RemoveContainer" containerID="3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908" Mar 16 15:50:42 crc kubenswrapper[4736]: E0316 15:50:42.172166 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908\": container with ID starting with 3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908 not found: ID does not exist" containerID="3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.172202 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908"} err="failed to get container status \"3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908\": rpc error: code = NotFound desc = could not find container \"3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908\": container with ID starting with 3bde03327ffaf92f2c9c30635211a449643510fbc9bd705ecb6985bb07d65908 not found: ID does not exist" Mar 16 15:50:42 crc kubenswrapper[4736]: I0316 15:50:42.990460 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485547bc-37b0-4488-bf02-bcef66249018" path="/var/lib/kubelet/pods/485547bc-37b0-4488-bf02-bcef66249018/volumes" Mar 16 15:50:49 crc kubenswrapper[4736]: I0316 15:50:49.856126 4736 scope.go:117] "RemoveContainer" containerID="fe5f8c29b5871382a79fae2158a1f2272366a91776b03e7ba9a43560a946df4a" Mar 16 15:51:03 crc kubenswrapper[4736]: I0316 15:51:03.248687 4736 generic.go:334] "Generic (PLEG): container finished" podID="7f216da9-755f-42e5-8058-15af7388a669" containerID="28ace84c765a251975337cfcef7af8f0a069c2ac0a7827866955d9c715efce8e" exitCode=0 Mar 16 15:51:03 crc kubenswrapper[4736]: I0316 15:51:03.248786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" event={"ID":"7f216da9-755f-42e5-8058-15af7388a669","Type":"ContainerDied","Data":"28ace84c765a251975337cfcef7af8f0a069c2ac0a7827866955d9c715efce8e"} Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.755617 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875460 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875502 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875555 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875633 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875655 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.875691 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876661 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876723 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876805 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876847 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876872 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876890 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876933 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.876973 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9bwv\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv\") pod \"7f216da9-755f-42e5-8058-15af7388a669\" (UID: \"7f216da9-755f-42e5-8058-15af7388a669\") " Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.884027 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.883938 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv" (OuterVolumeSpecName: "kube-api-access-s9bwv") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "kube-api-access-s9bwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.884181 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.884228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.887351 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.887353 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.889491 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.889503 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.890379 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.891939 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.894392 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.894506 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.911683 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.913653 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory" (OuterVolumeSpecName: "inventory") pod "7f216da9-755f-42e5-8058-15af7388a669" (UID: "7f216da9-755f-42e5-8058-15af7388a669"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980469 4736 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980505 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9bwv\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-kube-api-access-s9bwv\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980519 4736 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980531 4736 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980556 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980568 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980582 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980596 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980609 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980621 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980634 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980645 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/7f216da9-755f-42e5-8058-15af7388a669-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980658 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:04 crc kubenswrapper[4736]: I0316 15:51:04.980668 4736 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f216da9-755f-42e5-8058-15af7388a669-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.272976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" event={"ID":"7f216da9-755f-42e5-8058-15af7388a669","Type":"ContainerDied","Data":"7ec242addc6b714dfda94931d4c71851396f5dbbfe67c85ffa0e2b1c164ec5c8"} Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.273033 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec242addc6b714dfda94931d4c71851396f5dbbfe67c85ffa0e2b1c164ec5c8" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.273151 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561246 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q"] Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561679 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="extract-content" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561702 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="extract-content" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561719 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561726 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561737 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561743 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561754 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="extract-content" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561760 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="extract-content" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561775 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="extract-utilities" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561782 4736 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="extract-utilities" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561803 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="extract-utilities" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561810 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="extract-utilities" Mar 16 15:51:05 crc kubenswrapper[4736]: E0316 15:51:05.561842 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f216da9-755f-42e5-8058-15af7388a669" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.561851 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f216da9-755f-42e5-8058-15af7388a669" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.562050 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f216da9-755f-42e5-8058-15af7388a669" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.562069 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4028e23f-104c-4f85-9b12-159728a7db60" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.562098 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="485547bc-37b0-4488-bf02-bcef66249018" containerName="registry-server" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.562784 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.574777 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.575053 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.577037 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.577395 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.579167 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.600455 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q"] Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.693250 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqmn\" (UniqueName: \"kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.693324 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.693405 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.693530 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.693567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.795518 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.795606 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.795638 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.795655 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.795730 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqmn\" (UniqueName: 
\"kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.796810 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.801033 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.801065 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.807797 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.822567 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqmn\" (UniqueName: \"kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xt72q\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:05 crc kubenswrapper[4736]: I0316 15:51:05.878859 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:51:06 crc kubenswrapper[4736]: I0316 15:51:06.411530 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q"] Mar 16 15:51:07 crc kubenswrapper[4736]: I0316 15:51:07.298081 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" event={"ID":"cd71853b-a1d3-4429-90b3-4cee241cfa21","Type":"ContainerStarted","Data":"5531871064d20336b0e9f2cd28769b6f6d23376f066f71d12bb7d69cee0db9da"} Mar 16 15:51:07 crc kubenswrapper[4736]: I0316 15:51:07.298460 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" event={"ID":"cd71853b-a1d3-4429-90b3-4cee241cfa21","Type":"ContainerStarted","Data":"7f994c9c2f8408f0ae366c5c3d4f5b8de868b8d78a9117f2a1c8565512621620"} Mar 16 15:51:07 crc kubenswrapper[4736]: I0316 15:51:07.330881 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" podStartSLOduration=1.9194984929999999 podStartE2EDuration="2.330845979s" podCreationTimestamp="2026-03-16 15:51:05 +0000 UTC" firstStartedPulling="2026-03-16 15:51:06.416298927 +0000 UTC m=+2268.143689214" lastFinishedPulling="2026-03-16 15:51:06.827646413 +0000 UTC m=+2268.555036700" observedRunningTime="2026-03-16 15:51:07.31991464 +0000 UTC m=+2269.047304927" watchObservedRunningTime="2026-03-16 15:51:07.330845979 +0000 UTC m=+2269.058236306" Mar 16 15:51:08 crc kubenswrapper[4736]: I0316 15:51:08.507859 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:51:08 crc kubenswrapper[4736]: I0316 15:51:08.508767 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:51:38 crc kubenswrapper[4736]: I0316 15:51:38.507897 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:51:38 crc kubenswrapper[4736]: I0316 15:51:38.508727 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:51:38 crc kubenswrapper[4736]: I0316 15:51:38.508793 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 15:51:38 crc kubenswrapper[4736]: I0316 15:51:38.510049 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 15:51:38 crc kubenswrapper[4736]: I0316 15:51:38.510339 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" gracePeriod=600 Mar 16 15:51:38 crc kubenswrapper[4736]: E0316 15:51:38.643584 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:51:39 crc kubenswrapper[4736]: I0316 15:51:39.600951 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" exitCode=0 Mar 16 15:51:39 crc kubenswrapper[4736]: I0316 15:51:39.601452 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985"} Mar 16 15:51:39 crc kubenswrapper[4736]: I0316 15:51:39.601517 4736 scope.go:117] "RemoveContainer" containerID="6064bfd54b4506f269b651af4704ab9ae454797c4e81b4b99e7443432c2e1d90" Mar 16 15:51:39 crc kubenswrapper[4736]: I0316 15:51:39.602738 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:51:39 crc kubenswrapper[4736]: E0316 15:51:39.603996 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:51:49 crc kubenswrapper[4736]: I0316 15:51:49.977999 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:51:49 crc kubenswrapper[4736]: E0316 15:51:49.979604 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.558234 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.561071 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.576774 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.623278 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc9k9\" (UniqueName: \"kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.623373 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.623588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.725677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.725783 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.725889 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc9k9\" (UniqueName: \"kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.726910 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.727284 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.767180 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zc9k9\" (UniqueName: \"kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9\") pod \"certified-operators-bxhxg\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:54 crc kubenswrapper[4736]: I0316 15:51:54.896300 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:51:55 crc kubenswrapper[4736]: I0316 15:51:55.419770 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:51:55 crc kubenswrapper[4736]: I0316 15:51:55.800415 4736 generic.go:334] "Generic (PLEG): container finished" podID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerID="b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864" exitCode=0 Mar 16 15:51:55 crc kubenswrapper[4736]: I0316 15:51:55.800475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerDied","Data":"b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864"} Mar 16 15:51:55 crc kubenswrapper[4736]: I0316 15:51:55.800755 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerStarted","Data":"ddc0a85a21b5a2d9476ff32f00a97dd082439e3e95f832544660333f469ca7ef"} Mar 16 15:51:56 crc kubenswrapper[4736]: I0316 15:51:56.812231 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerStarted","Data":"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1"} Mar 16 15:51:58 crc kubenswrapper[4736]: I0316 15:51:58.834713 4736 generic.go:334] "Generic (PLEG): container finished" podID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerID="443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1" exitCode=0 Mar 16 15:51:58 crc kubenswrapper[4736]: I0316 15:51:58.835178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerDied","Data":"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1"} Mar 16 15:51:59 crc kubenswrapper[4736]: I0316 15:51:59.853127 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerStarted","Data":"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd"} Mar 16 15:51:59 crc kubenswrapper[4736]: I0316 15:51:59.881983 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bxhxg" podStartSLOduration=2.444757914 podStartE2EDuration="5.881950744s" podCreationTimestamp="2026-03-16 15:51:54 +0000 UTC" firstStartedPulling="2026-03-16 15:51:55.802955234 +0000 UTC m=+2317.530345531" lastFinishedPulling="2026-03-16 15:51:59.240148084 +0000 UTC m=+2320.967538361" observedRunningTime="2026-03-16 15:51:59.868864047 +0000 UTC m=+2321.596254334" watchObservedRunningTime="2026-03-16 15:51:59.881950744 +0000 UTC m=+2321.609341031" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.145159 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29561272-nqfr6"] Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.146416 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.153508 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.153700 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.153717 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.158744 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561272-nqfr6"] Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.287525 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9zt\" (UniqueName: \"kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt\") pod \"auto-csr-approver-29561272-nqfr6\" (UID: \"042b14dc-bb3b-4764-a023-08f2e98279da\") " pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.389218 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9zt\" (UniqueName: \"kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt\") pod \"auto-csr-approver-29561272-nqfr6\" (UID: \"042b14dc-bb3b-4764-a023-08f2e98279da\") " pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.408876 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9zt\" (UniqueName: \"kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt\") pod \"auto-csr-approver-29561272-nqfr6\" (UID: \"042b14dc-bb3b-4764-a023-08f2e98279da\") " pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.476505 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:00 crc kubenswrapper[4736]: I0316 15:52:00.942946 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561272-nqfr6"] Mar 16 15:52:01 crc kubenswrapper[4736]: I0316 15:52:01.870997 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" event={"ID":"042b14dc-bb3b-4764-a023-08f2e98279da","Type":"ContainerStarted","Data":"0ebf997ca2dd4d36a8cfaa6a7054f9af5077793069d0f2b7d39656a8d621ba27"} Mar 16 15:52:02 crc kubenswrapper[4736]: I0316 15:52:02.883792 4736 generic.go:334] "Generic (PLEG): container finished" podID="042b14dc-bb3b-4764-a023-08f2e98279da" containerID="01878ec3a3313765440fd72e050ba635abc31e0e8e9faacf735e28f3ca0a1099" exitCode=0 Mar 16 15:52:02 crc kubenswrapper[4736]: I0316 15:52:02.883956 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" event={"ID":"042b14dc-bb3b-4764-a023-08f2e98279da","Type":"ContainerDied","Data":"01878ec3a3313765440fd72e050ba635abc31e0e8e9faacf735e28f3ca0a1099"} Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.265343 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.374090 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s9zt\" (UniqueName: \"kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt\") pod \"042b14dc-bb3b-4764-a023-08f2e98279da\" (UID: \"042b14dc-bb3b-4764-a023-08f2e98279da\") " Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.380094 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt" (OuterVolumeSpecName: "kube-api-access-6s9zt") pod "042b14dc-bb3b-4764-a023-08f2e98279da" (UID: "042b14dc-bb3b-4764-a023-08f2e98279da"). InnerVolumeSpecName "kube-api-access-6s9zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.476773 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s9zt\" (UniqueName: \"kubernetes.io/projected/042b14dc-bb3b-4764-a023-08f2e98279da-kube-api-access-6s9zt\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.896684 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.896795 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.907337 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" event={"ID":"042b14dc-bb3b-4764-a023-08f2e98279da","Type":"ContainerDied","Data":"0ebf997ca2dd4d36a8cfaa6a7054f9af5077793069d0f2b7d39656a8d621ba27"} Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.907374 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ebf997ca2dd4d36a8cfaa6a7054f9af5077793069d0f2b7d39656a8d621ba27" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.907422 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561272-nqfr6" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.943321 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:04 crc kubenswrapper[4736]: I0316 15:52:04.979030 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:52:04 crc kubenswrapper[4736]: E0316 15:52:04.979340 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:52:05 crc kubenswrapper[4736]: I0316 15:52:05.365559 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561266-9xgc7"] Mar 16 15:52:05 crc kubenswrapper[4736]: I0316 15:52:05.378231 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561266-9xgc7"] Mar 16 15:52:05 crc kubenswrapper[4736]: I0316 15:52:05.973240 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:06 crc kubenswrapper[4736]: I0316 15:52:06.048630 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:52:06 crc kubenswrapper[4736]: I0316 15:52:06.991132 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f" path="/var/lib/kubelet/pods/e0845ab0-7dca-4d2e-a0cb-55ce7fdbc40f/volumes" Mar 16 15:52:07 crc kubenswrapper[4736]: I0316 15:52:07.949856 4736 generic.go:334] "Generic (PLEG): container finished" podID="cd71853b-a1d3-4429-90b3-4cee241cfa21" containerID="5531871064d20336b0e9f2cd28769b6f6d23376f066f71d12bb7d69cee0db9da" exitCode=0 Mar 16 15:52:07 crc kubenswrapper[4736]: I0316 15:52:07.949966 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" event={"ID":"cd71853b-a1d3-4429-90b3-4cee241cfa21","Type":"ContainerDied","Data":"5531871064d20336b0e9f2cd28769b6f6d23376f066f71d12bb7d69cee0db9da"} Mar 16 15:52:07 crc kubenswrapper[4736]: I0316 15:52:07.950043 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bxhxg" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="registry-server" containerID="cri-o://0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd" gracePeriod=2 Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.477192 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.570389 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc9k9\" (UniqueName: \"kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9\") pod \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.570504 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities\") pod \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.570547 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content\") pod \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\" (UID: \"01184f49-f2b6-45b6-8938-a4cfaff5f6ae\") " Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.571443 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities" (OuterVolumeSpecName: "utilities") pod "01184f49-f2b6-45b6-8938-a4cfaff5f6ae" (UID: "01184f49-f2b6-45b6-8938-a4cfaff5f6ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.578317 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9" (OuterVolumeSpecName: "kube-api-access-zc9k9") pod "01184f49-f2b6-45b6-8938-a4cfaff5f6ae" (UID: "01184f49-f2b6-45b6-8938-a4cfaff5f6ae"). InnerVolumeSpecName "kube-api-access-zc9k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.622605 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01184f49-f2b6-45b6-8938-a4cfaff5f6ae" (UID: "01184f49-f2b6-45b6-8938-a4cfaff5f6ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.672963 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc9k9\" (UniqueName: \"kubernetes.io/projected/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-kube-api-access-zc9k9\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.672994 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.673005 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01184f49-f2b6-45b6-8938-a4cfaff5f6ae-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.965196 4736 generic.go:334] "Generic (PLEG): container finished" podID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerID="0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd" exitCode=0 Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.965260 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bxhxg" Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.965275 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerDied","Data":"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd"} Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.965354 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bxhxg" event={"ID":"01184f49-f2b6-45b6-8938-a4cfaff5f6ae","Type":"ContainerDied","Data":"ddc0a85a21b5a2d9476ff32f00a97dd082439e3e95f832544660333f469ca7ef"} Mar 16 15:52:08 crc kubenswrapper[4736]: I0316 15:52:08.965377 4736 scope.go:117] "RemoveContainer" containerID="0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.028033 4736 scope.go:117] "RemoveContainer" containerID="443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.036632 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.047181 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bxhxg"] Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.069196 4736 scope.go:117] "RemoveContainer" containerID="b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.121627 4736 scope.go:117] "RemoveContainer" containerID="0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd" Mar 16 15:52:09 crc kubenswrapper[4736]: E0316 15:52:09.122202 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd\": container with ID starting with 0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd not found: ID does not exist" containerID="0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.122264 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd"} err="failed to get container status \"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd\": rpc error: code = NotFound desc = could not find container \"0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd\": container with ID starting with 0ae6fe046a019272882a9acd0ce71ce3f592c538cded9a14d4c072b30abe62bd not found: ID does not exist" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.122296 4736 scope.go:117] "RemoveContainer" containerID="443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1" Mar 16 15:52:09 crc kubenswrapper[4736]: E0316 15:52:09.122649 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1\": container with ID starting with 443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1 not found: ID does not exist" containerID="443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.122683 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1"} err="failed to get container status \"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1\": rpc error: code = NotFound desc = could not find container \"443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1\": container with ID starting with 443c7c37cb2665f22696121cfc3f6ec002d1b31c4e24b29c7ccb77ff9fbd62d1 not found: ID does not exist" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.122709 4736 scope.go:117] "RemoveContainer" containerID="b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864" Mar 16 15:52:09 crc kubenswrapper[4736]: E0316 15:52:09.122992 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864\": container with ID starting with b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864 not found: ID does not exist" containerID="b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.123025 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864"} err="failed to get container status \"b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864\": rpc error: code = NotFound desc = could not find container \"b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864\": container with ID starting with b841f97f707b398612dfab667af80433f23c81bfa4d92214659f02887cf6e864 not found: ID does not exist" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.434535 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.489949 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0\") pod \"cd71853b-a1d3-4429-90b3-4cee241cfa21\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.490412 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle\") pod \"cd71853b-a1d3-4429-90b3-4cee241cfa21\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.490543 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam\") pod \"cd71853b-a1d3-4429-90b3-4cee241cfa21\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.490592 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory\") pod \"cd71853b-a1d3-4429-90b3-4cee241cfa21\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.490662 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjqmn\" (UniqueName: \"kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn\") pod \"cd71853b-a1d3-4429-90b3-4cee241cfa21\" (UID: \"cd71853b-a1d3-4429-90b3-4cee241cfa21\") " Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.509894 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cd71853b-a1d3-4429-90b3-4cee241cfa21" (UID: "cd71853b-a1d3-4429-90b3-4cee241cfa21"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.509919 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn" (OuterVolumeSpecName: "kube-api-access-tjqmn") pod "cd71853b-a1d3-4429-90b3-4cee241cfa21" (UID: "cd71853b-a1d3-4429-90b3-4cee241cfa21"). InnerVolumeSpecName "kube-api-access-tjqmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.518316 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory" (OuterVolumeSpecName: "inventory") pod "cd71853b-a1d3-4429-90b3-4cee241cfa21" (UID: "cd71853b-a1d3-4429-90b3-4cee241cfa21"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.526597 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd71853b-a1d3-4429-90b3-4cee241cfa21" (UID: "cd71853b-a1d3-4429-90b3-4cee241cfa21"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.531374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cd71853b-a1d3-4429-90b3-4cee241cfa21" (UID: "cd71853b-a1d3-4429-90b3-4cee241cfa21"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.593067 4736 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.593118 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.593131 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd71853b-a1d3-4429-90b3-4cee241cfa21-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.593140 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjqmn\" (UniqueName: \"kubernetes.io/projected/cd71853b-a1d3-4429-90b3-4cee241cfa21-kube-api-access-tjqmn\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.593149 4736 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cd71853b-a1d3-4429-90b3-4cee241cfa21-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.973714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" event={"ID":"cd71853b-a1d3-4429-90b3-4cee241cfa21","Type":"ContainerDied","Data":"7f994c9c2f8408f0ae366c5c3d4f5b8de868b8d78a9117f2a1c8565512621620"} Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.973744 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xt72q" Mar 16 15:52:09 crc kubenswrapper[4736]: I0316 15:52:09.973748 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f994c9c2f8408f0ae366c5c3d4f5b8de868b8d78a9117f2a1c8565512621620" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.065664 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x"] Mar 16 15:52:10 crc kubenswrapper[4736]: E0316 15:52:10.066229 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="extract-utilities" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066301 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="extract-utilities" Mar 16 15:52:10 crc kubenswrapper[4736]: E0316 15:52:10.066363 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="registry-server" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066411 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="registry-server" Mar 16 15:52:10 crc kubenswrapper[4736]: E0316 15:52:10.066473 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042b14dc-bb3b-4764-a023-08f2e98279da" containerName="oc" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066523 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="042b14dc-bb3b-4764-a023-08f2e98279da" containerName="oc" Mar 16 15:52:10 crc kubenswrapper[4736]: E0316 15:52:10.066601 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="extract-content" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066649 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="extract-content" Mar 16 15:52:10 crc kubenswrapper[4736]: E0316 15:52:10.066702 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd71853b-a1d3-4429-90b3-4cee241cfa21" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066751 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd71853b-a1d3-4429-90b3-4cee241cfa21" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.066961 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" containerName="registry-server" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.067020 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd71853b-a1d3-4429-90b3-4cee241cfa21" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.067088 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="042b14dc-bb3b-4764-a023-08f2e98279da" containerName="oc" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.067815 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.070728 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.070769 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.070764 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.070955 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.071388 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.071677 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.117271 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x"] Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.204566 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.204856 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.204952 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtpd\" (UniqueName: \"kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.205115 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.205209 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.205349 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.307429 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.308496 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.308717 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.308752 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtpd\" (UniqueName: \"kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.308929 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.309018 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: 
\"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.312806 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.312894 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.313549 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.315507 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.316950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.327573 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtpd\" (UniqueName: \"kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.387211 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.948061 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x"] Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.989526 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01184f49-f2b6-45b6-8938-a4cfaff5f6ae" path="/var/lib/kubelet/pods/01184f49-f2b6-45b6-8938-a4cfaff5f6ae/volumes" Mar 16 15:52:10 crc kubenswrapper[4736]: I0316 15:52:10.990623 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" event={"ID":"7e4df23c-a76d-497d-b0c1-0b3264ed20ce","Type":"ContainerStarted","Data":"746fc65e4dea70f79f8c57e60815bcaa7d4ba15637ebdcfc0b40c16614b4dc2a"} Mar 16 15:52:12 crc kubenswrapper[4736]: I0316 15:52:12.001694 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" event={"ID":"7e4df23c-a76d-497d-b0c1-0b3264ed20ce","Type":"ContainerStarted","Data":"c6640cbbbbab1d215375e47f098d8b6d8ea9137fabada29450f282b2d5cc6d4a"} Mar 16 15:52:12 crc kubenswrapper[4736]: I0316 15:52:12.028925 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" podStartSLOduration=1.323495686 podStartE2EDuration="2.02890302s" podCreationTimestamp="2026-03-16 15:52:10 +0000 UTC" firstStartedPulling="2026-03-16 15:52:10.956841984 +0000 UTC m=+2332.684232281" lastFinishedPulling="2026-03-16 15:52:11.662249328 +0000 UTC m=+2333.389639615" observedRunningTime="2026-03-16 15:52:12.015739891 +0000 UTC m=+2333.743130198" watchObservedRunningTime="2026-03-16 15:52:12.02890302 +0000 UTC m=+2333.756293317" Mar 16 15:52:19 crc kubenswrapper[4736]: I0316 15:52:19.978394 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:52:19 crc kubenswrapper[4736]: E0316 15:52:19.979133 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:52:31 crc kubenswrapper[4736]: I0316 15:52:31.977541 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:52:31 crc kubenswrapper[4736]: E0316 15:52:31.978225 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:52:43 crc kubenswrapper[4736]: I0316 15:52:43.978826 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:52:43 crc kubenswrapper[4736]: E0316 15:52:43.979574 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:52:50 crc kubenswrapper[4736]: I0316 15:52:50.012529 4736 scope.go:117] "RemoveContainer" containerID="f956110bc5b24afb3302309c55739d10017f3aa05b5cec1835b0ea89e60a4c5e" Mar 16 15:52:58 crc kubenswrapper[4736]: I0316 15:52:58.985646 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:52:58 crc kubenswrapper[4736]: E0316 15:52:58.986602 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:53:01 crc kubenswrapper[4736]: I0316 15:53:01.518520 4736 generic.go:334] "Generic (PLEG): container finished" podID="7e4df23c-a76d-497d-b0c1-0b3264ed20ce" containerID="c6640cbbbbab1d215375e47f098d8b6d8ea9137fabada29450f282b2d5cc6d4a" exitCode=0 Mar 16 15:53:01 crc kubenswrapper[4736]: I0316 15:53:01.518728 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" event={"ID":"7e4df23c-a76d-497d-b0c1-0b3264ed20ce","Type":"ContainerDied","Data":"c6640cbbbbab1d215375e47f098d8b6d8ea9137fabada29450f282b2d5cc6d4a"} Mar 16 15:53:02 crc kubenswrapper[4736]: I0316 15:53:02.958211 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001289 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001367 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001406 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001455 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001572 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtpd\" (UniqueName: \"kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.001632 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\" (UID: \"7e4df23c-a76d-497d-b0c1-0b3264ed20ce\") " Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.006988 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.014672 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd" (OuterVolumeSpecName: "kube-api-access-fhtpd") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "kube-api-access-fhtpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.037214 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.044988 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.055205 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory" (OuterVolumeSpecName: "inventory") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.067923 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e4df23c-a76d-497d-b0c1-0b3264ed20ce" (UID: "7e4df23c-a76d-497d-b0c1-0b3264ed20ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104646 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104684 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104695 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104706 4736 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104715 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtpd\" (UniqueName: \"kubernetes.io/projected/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-kube-api-access-fhtpd\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.104728 4736 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7e4df23c-a76d-497d-b0c1-0b3264ed20ce-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.541192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" event={"ID":"7e4df23c-a76d-497d-b0c1-0b3264ed20ce","Type":"ContainerDied","Data":"746fc65e4dea70f79f8c57e60815bcaa7d4ba15637ebdcfc0b40c16614b4dc2a"} Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.541229 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.541234 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746fc65e4dea70f79f8c57e60815bcaa7d4ba15637ebdcfc0b40c16614b4dc2a" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.642651 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h"] Mar 16 15:53:03 crc kubenswrapper[4736]: E0316 15:53:03.643056 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4df23c-a76d-497d-b0c1-0b3264ed20ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.643075 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4df23c-a76d-497d-b0c1-0b3264ed20ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.643275 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4df23c-a76d-497d-b0c1-0b3264ed20ce" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.643888 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.647721 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.647907 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.647980 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.647985 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.651644 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.660373 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h"] Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.716737 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.716801 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.716902 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.716963 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.716996 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjlt\" (UniqueName: \"kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: E0316 15:53:03.751359 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e4df23c_a76d_497d_b0c1_0b3264ed20ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e4df23c_a76d_497d_b0c1_0b3264ed20ce.slice/crio-746fc65e4dea70f79f8c57e60815bcaa7d4ba15637ebdcfc0b40c16614b4dc2a\": RecentStats: unable to find data in memory cache]" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.818262 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.818312 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.818384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.818431 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" 
Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.818455 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjlt\" (UniqueName: \"kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.822041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.823048 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.823187 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.823901 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.837351 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjlt\" (UniqueName: \"kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:03 crc kubenswrapper[4736]: I0316 15:53:03.967919 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:53:04 crc kubenswrapper[4736]: I0316 15:53:04.550674 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h"] Mar 16 15:53:05 crc kubenswrapper[4736]: I0316 15:53:05.559168 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" event={"ID":"97bb28be-aaed-4b82-9df1-cb24c9dd48e3","Type":"ContainerStarted","Data":"f392829fbe481c12ee13e19e5bf15b42ecfa1df6169a3448f914132f46ab0ac2"} Mar 16 15:53:06 crc kubenswrapper[4736]: I0316 15:53:06.571976 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" event={"ID":"97bb28be-aaed-4b82-9df1-cb24c9dd48e3","Type":"ContainerStarted","Data":"28b1b8b9d0ef78a91d5f6ca5f5e5904739b47189d88f3b55375029b324fa63b7"} Mar 16 15:53:06 crc kubenswrapper[4736]: I0316 15:53:06.591174 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" podStartSLOduration=2.827633618 podStartE2EDuration="3.591154395s" podCreationTimestamp="2026-03-16 15:53:03 +0000 UTC" firstStartedPulling="2026-03-16 15:53:04.57055964 +0000 UTC m=+2386.297949927" lastFinishedPulling="2026-03-16 15:53:05.334080417 +0000 UTC m=+2387.061470704" observedRunningTime="2026-03-16 15:53:06.587630379 +0000 UTC m=+2388.315020676" watchObservedRunningTime="2026-03-16 15:53:06.591154395 +0000 UTC m=+2388.318544682" Mar 16 15:53:10 crc kubenswrapper[4736]: I0316 15:53:10.978543 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:53:10 crc kubenswrapper[4736]: E0316 15:53:10.979151 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:53:21 crc kubenswrapper[4736]: I0316 15:53:21.977850 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:53:21 crc kubenswrapper[4736]: E0316 15:53:21.978995 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:53:33 crc kubenswrapper[4736]: I0316 15:53:33.978777 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:53:33 crc kubenswrapper[4736]: E0316 15:53:33.979702 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:53:47 crc kubenswrapper[4736]: I0316 15:53:47.979157 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:53:47 crc kubenswrapper[4736]: E0316 15:53:47.980030 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:53:58 crc kubenswrapper[4736]: I0316 15:53:58.986712 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:53:58 crc kubenswrapper[4736]: E0316 15:53:58.987477 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.157253 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561274-tdh84"] Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.159472 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.168243 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.168551 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.168744 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.172813 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfrg\" (UniqueName: \"kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg\") pod \"auto-csr-approver-29561274-tdh84\" (UID: \"35a0f0fa-94fe-450e-9800-565c0f5b66e1\") " pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.194151 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561274-tdh84"] Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.275451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfrg\" (UniqueName: \"kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg\") pod \"auto-csr-approver-29561274-tdh84\" (UID: \"35a0f0fa-94fe-450e-9800-565c0f5b66e1\") " pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.300395 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfrg\" (UniqueName: 
\"kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg\") pod \"auto-csr-approver-29561274-tdh84\" (UID: \"35a0f0fa-94fe-450e-9800-565c0f5b66e1\") " pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:00 crc kubenswrapper[4736]: I0316 15:54:00.499954 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:01 crc kubenswrapper[4736]: I0316 15:54:00.999535 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561274-tdh84"] Mar 16 15:54:01 crc kubenswrapper[4736]: I0316 15:54:01.066316 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561274-tdh84" event={"ID":"35a0f0fa-94fe-450e-9800-565c0f5b66e1","Type":"ContainerStarted","Data":"ae56e18f45769be85785d3ded7073f55ce115bb4118d5cb47f6dd523ee117a83"} Mar 16 15:54:03 crc kubenswrapper[4736]: I0316 15:54:03.083021 4736 generic.go:334] "Generic (PLEG): container finished" podID="35a0f0fa-94fe-450e-9800-565c0f5b66e1" containerID="19ccce3bd13bcee9c4bea214ebe6034c8d2139fce8eb1204d3d9993e69b77a2b" exitCode=0 Mar 16 15:54:03 crc kubenswrapper[4736]: I0316 15:54:03.083078 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561274-tdh84" event={"ID":"35a0f0fa-94fe-450e-9800-565c0f5b66e1","Type":"ContainerDied","Data":"19ccce3bd13bcee9c4bea214ebe6034c8d2139fce8eb1204d3d9993e69b77a2b"} Mar 16 15:54:04 crc kubenswrapper[4736]: I0316 15:54:04.490826 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:04 crc kubenswrapper[4736]: I0316 15:54:04.650350 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kfrg\" (UniqueName: \"kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg\") pod \"35a0f0fa-94fe-450e-9800-565c0f5b66e1\" (UID: \"35a0f0fa-94fe-450e-9800-565c0f5b66e1\") " Mar 16 15:54:04 crc kubenswrapper[4736]: I0316 15:54:04.655641 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg" (OuterVolumeSpecName: "kube-api-access-7kfrg") pod "35a0f0fa-94fe-450e-9800-565c0f5b66e1" (UID: "35a0f0fa-94fe-450e-9800-565c0f5b66e1"). InnerVolumeSpecName "kube-api-access-7kfrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:54:04 crc kubenswrapper[4736]: I0316 15:54:04.758004 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kfrg\" (UniqueName: \"kubernetes.io/projected/35a0f0fa-94fe-450e-9800-565c0f5b66e1-kube-api-access-7kfrg\") on node \"crc\" DevicePath \"\"" Mar 16 15:54:05 crc kubenswrapper[4736]: I0316 15:54:05.101394 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561274-tdh84" event={"ID":"35a0f0fa-94fe-450e-9800-565c0f5b66e1","Type":"ContainerDied","Data":"ae56e18f45769be85785d3ded7073f55ce115bb4118d5cb47f6dd523ee117a83"} Mar 16 15:54:05 crc kubenswrapper[4736]: I0316 15:54:05.101440 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae56e18f45769be85785d3ded7073f55ce115bb4118d5cb47f6dd523ee117a83" Mar 16 15:54:05 crc kubenswrapper[4736]: I0316 15:54:05.101894 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561274-tdh84" Mar 16 15:54:05 crc kubenswrapper[4736]: I0316 15:54:05.584996 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561268-pr7tq"] Mar 16 15:54:05 crc kubenswrapper[4736]: I0316 15:54:05.599485 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561268-pr7tq"] Mar 16 15:54:06 crc kubenswrapper[4736]: I0316 15:54:06.990638 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cd2bef-f6eb-4e45-8c17-b74fab484d78" path="/var/lib/kubelet/pods/80cd2bef-f6eb-4e45-8c17-b74fab484d78/volumes" Mar 16 15:54:11 crc kubenswrapper[4736]: I0316 15:54:11.978553 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:54:11 crc kubenswrapper[4736]: E0316 15:54:11.979405 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:54:22 crc kubenswrapper[4736]: I0316 15:54:22.978930 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:54:22 crc kubenswrapper[4736]: E0316 15:54:22.980383 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:54:34 crc kubenswrapper[4736]: I0316 15:54:34.978810 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:54:34 crc kubenswrapper[4736]: E0316 15:54:34.979660 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:54:45 crc kubenswrapper[4736]: I0316 15:54:45.979134 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:54:45 crc kubenswrapper[4736]: E0316 15:54:45.980255 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:54:50 crc kubenswrapper[4736]: I0316 15:54:50.125611 4736 scope.go:117] "RemoveContainer" containerID="e206711ee14c56b94a7025a8ffc1f49637c871d58337d4409e1b2b74fe951b0e" Mar 16 
15:55:00 crc kubenswrapper[4736]: I0316 15:55:00.979066 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:55:00 crc kubenswrapper[4736]: E0316 15:55:00.980658 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:55:11 crc kubenswrapper[4736]: I0316 15:55:11.978663 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:55:11 crc kubenswrapper[4736]: E0316 15:55:11.981702 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.363803 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:55:19 crc kubenswrapper[4736]: E0316 15:55:19.365033 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a0f0fa-94fe-450e-9800-565c0f5b66e1" containerName="oc" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.365051 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a0f0fa-94fe-450e-9800-565c0f5b66e1" containerName="oc" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.365294 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a0f0fa-94fe-450e-9800-565c0f5b66e1" containerName="oc" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.367144 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.388145 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.502362 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxkk9\" (UniqueName: \"kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.502412 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.502573 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.604157 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.604256 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxkk9\" (UniqueName: \"kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.604284 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.604792 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.605266 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.622736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dxkk9\" (UniqueName: \"kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9\") pod \"redhat-operators-lsshg\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:19 crc kubenswrapper[4736]: I0316 15:55:19.708681 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:20 crc kubenswrapper[4736]: I0316 15:55:20.244063 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:55:20 crc kubenswrapper[4736]: I0316 15:55:20.913335 4736 generic.go:334] "Generic (PLEG): container finished" podID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerID="a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe" exitCode=0 Mar 16 15:55:20 crc kubenswrapper[4736]: I0316 15:55:20.913576 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerDied","Data":"a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe"} Mar 16 15:55:20 crc kubenswrapper[4736]: I0316 15:55:20.913671 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerStarted","Data":"715e5c5a965e0f1dd8591aeaf1e447e63ff960c4bfce5cfa5c4f683b74aecbd5"} Mar 16 15:55:20 crc kubenswrapper[4736]: I0316 15:55:20.916322 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 15:55:21 crc kubenswrapper[4736]: I0316 15:55:21.924126 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerStarted","Data":"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f"} Mar 16 15:55:24 crc kubenswrapper[4736]: I0316 15:55:24.978915 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:55:24 crc kubenswrapper[4736]: E0316 15:55:24.979925 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:55:26 crc kubenswrapper[4736]: I0316 15:55:26.966638 4736 generic.go:334] "Generic (PLEG): container finished" podID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerID="1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f" exitCode=0 Mar 16 15:55:26 crc kubenswrapper[4736]: I0316 15:55:26.966715 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerDied","Data":"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f"} Mar 16 15:55:27 crc kubenswrapper[4736]: I0316 15:55:27.976550 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" 
event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerStarted","Data":"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e"} Mar 16 15:55:29 crc kubenswrapper[4736]: I0316 15:55:29.710090 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:29 crc kubenswrapper[4736]: I0316 15:55:29.710441 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:30 crc kubenswrapper[4736]: I0316 15:55:30.773984 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lsshg" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" probeResult="failure" output=< Mar 16 15:55:30 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:55:30 crc kubenswrapper[4736]: > Mar 16 15:55:37 crc kubenswrapper[4736]: I0316 15:55:37.978615 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:55:37 crc kubenswrapper[4736]: E0316 15:55:37.979451 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:55:40 crc kubenswrapper[4736]: I0316 15:55:40.789950 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lsshg" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" probeResult="failure" output=< Mar 16 15:55:40 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:55:40 crc kubenswrapper[4736]: > Mar 16 15:55:50 crc kubenswrapper[4736]: I0316 15:55:50.775829 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lsshg" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" probeResult="failure" output=< Mar 16 15:55:50 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 15:55:50 crc kubenswrapper[4736]: > Mar 16 15:55:51 crc kubenswrapper[4736]: I0316 15:55:51.977982 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:55:51 crc kubenswrapper[4736]: E0316 15:55:51.978682 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:55:59 crc kubenswrapper[4736]: I0316 15:55:59.775887 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:55:59 crc kubenswrapper[4736]: I0316 15:55:59.808937 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lsshg" podStartSLOduration=34.326304602 
podStartE2EDuration="40.808918775s" podCreationTimestamp="2026-03-16 15:55:19 +0000 UTC" firstStartedPulling="2026-03-16 15:55:20.916013377 +0000 UTC m=+2522.643403664" lastFinishedPulling="2026-03-16 15:55:27.39862755 +0000 UTC m=+2529.126017837" observedRunningTime="2026-03-16 15:55:28.002614019 +0000 UTC m=+2529.730004306" watchObservedRunningTime="2026-03-16 15:55:59.808918775 +0000 UTC m=+2561.536309082" Mar 16 15:55:59 crc kubenswrapper[4736]: I0316 15:55:59.828863 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.019940 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.161937 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561276-n47zt"] Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.164294 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.167089 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.167983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.168555 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.205879 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs277\" (UniqueName: \"kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277\") pod \"auto-csr-approver-29561276-n47zt\" (UID: \"80de45db-7b56-43ff-a1eb-e35f34e5de64\") " pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.207098 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561276-n47zt"] Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.309622 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs277\" (UniqueName: \"kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277\") pod \"auto-csr-approver-29561276-n47zt\" (UID: \"80de45db-7b56-43ff-a1eb-e35f34e5de64\") " pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.342943 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs277\" (UniqueName: \"kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277\") pod \"auto-csr-approver-29561276-n47zt\" (UID: \"80de45db-7b56-43ff-a1eb-e35f34e5de64\") " pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.522316 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:00 crc kubenswrapper[4736]: I0316 15:56:00.964373 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561276-n47zt"] Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.292433 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561276-n47zt" event={"ID":"80de45db-7b56-43ff-a1eb-e35f34e5de64","Type":"ContainerStarted","Data":"30f129b2dbc4d4941aa593b533483a38f51093e20d07dc7d3d19033857a97c44"} Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.292576 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lsshg" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" containerID="cri-o://131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e" gracePeriod=2 Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.793345 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.849829 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities\") pod \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.850142 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content\") pod \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.850199 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxkk9\" (UniqueName: \"kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9\") pod \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\" (UID: \"c4908205-8195-4ba9-8c91-7ecbc1e530e2\") " Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.850883 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities" (OuterVolumeSpecName: "utilities") pod "c4908205-8195-4ba9-8c91-7ecbc1e530e2" (UID: "c4908205-8195-4ba9-8c91-7ecbc1e530e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.857285 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9" (OuterVolumeSpecName: "kube-api-access-dxkk9") pod "c4908205-8195-4ba9-8c91-7ecbc1e530e2" (UID: "c4908205-8195-4ba9-8c91-7ecbc1e530e2"). InnerVolumeSpecName "kube-api-access-dxkk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.952262 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxkk9\" (UniqueName: \"kubernetes.io/projected/c4908205-8195-4ba9-8c91-7ecbc1e530e2-kube-api-access-dxkk9\") on node \"crc\" DevicePath \"\"" Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.952292 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 15:56:01 crc kubenswrapper[4736]: I0316 15:56:01.986046 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4908205-8195-4ba9-8c91-7ecbc1e530e2" (UID: "c4908205-8195-4ba9-8c91-7ecbc1e530e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.057766 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4908205-8195-4ba9-8c91-7ecbc1e530e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.304178 4736 generic.go:334] "Generic (PLEG): container finished" podID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerID="131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e" exitCode=0 Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.304241 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lsshg" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.304263 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerDied","Data":"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e"} Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.304800 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lsshg" event={"ID":"c4908205-8195-4ba9-8c91-7ecbc1e530e2","Type":"ContainerDied","Data":"715e5c5a965e0f1dd8591aeaf1e447e63ff960c4bfce5cfa5c4f683b74aecbd5"} Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.304817 4736 scope.go:117] "RemoveContainer" containerID="131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.326231 4736 scope.go:117] "RemoveContainer" containerID="1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.353200 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.363605 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lsshg"] Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.368569 4736 scope.go:117] "RemoveContainer" containerID="a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.417074 4736 scope.go:117] "RemoveContainer" containerID="131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e" Mar 16 15:56:02 crc kubenswrapper[4736]: E0316 15:56:02.417859 4736 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e\": container with ID starting with 131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e not found: ID does not exist" containerID="131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.417921 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e"} err="failed to get container status \"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e\": rpc error: code = NotFound desc = could not find container \"131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e\": container with ID starting with 131ed96888a9c9d2fcdd55651ae2ec37985a43017735c607f6a724d79e5dad2e not found: ID does not exist" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.417951 4736 scope.go:117] "RemoveContainer" containerID="1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f" Mar 16 15:56:02 crc kubenswrapper[4736]: E0316 15:56:02.418413 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f\": container with ID starting with 1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f not found: ID does not exist" containerID="1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.418443 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f"} err="failed to get container status \"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f\": rpc error: code = NotFound desc = could not find container \"1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f\": container with ID starting with 1d6ee2db149cbbafa5d59057693cbe35f0c4f71ef669bf59c7a4ee90c96d300f not found: ID does not exist" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.418465 4736 scope.go:117] "RemoveContainer" containerID="a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe" Mar 16 15:56:02 crc kubenswrapper[4736]: E0316 15:56:02.418959 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe\": container with ID starting with a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe not found: ID does not exist" containerID="a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.419016 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe"} err="failed to get container status \"a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe\": rpc error: code = NotFound desc = could not find container \"a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe\": container with ID starting with a95a48e495ab6628c7c94a4c11b46736deff8eaefae7694e3ee33ca4094ba8fe not found: ID does not exist" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.977733 4736 scope.go:117] "RemoveContainer" 
containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:56:02 crc kubenswrapper[4736]: E0316 15:56:02.978623 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:56:02 crc kubenswrapper[4736]: I0316 15:56:02.990071 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" path="/var/lib/kubelet/pods/c4908205-8195-4ba9-8c91-7ecbc1e530e2/volumes" Mar 16 15:56:03 crc kubenswrapper[4736]: I0316 15:56:03.316658 4736 generic.go:334] "Generic (PLEG): container finished" podID="80de45db-7b56-43ff-a1eb-e35f34e5de64" containerID="9cc0ffdd03fd069318084af0a58cd54e789774ae113bc2be1567ebbc91ce708d" exitCode=0 Mar 16 15:56:03 crc kubenswrapper[4736]: I0316 15:56:03.316695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561276-n47zt" event={"ID":"80de45db-7b56-43ff-a1eb-e35f34e5de64","Type":"ContainerDied","Data":"9cc0ffdd03fd069318084af0a58cd54e789774ae113bc2be1567ebbc91ce708d"} Mar 16 15:56:04 crc kubenswrapper[4736]: I0316 15:56:04.694686 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:04 crc kubenswrapper[4736]: I0316 15:56:04.806281 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs277\" (UniqueName: \"kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277\") pod \"80de45db-7b56-43ff-a1eb-e35f34e5de64\" (UID: \"80de45db-7b56-43ff-a1eb-e35f34e5de64\") " Mar 16 15:56:04 crc kubenswrapper[4736]: I0316 15:56:04.836812 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277" (OuterVolumeSpecName: "kube-api-access-gs277") pod "80de45db-7b56-43ff-a1eb-e35f34e5de64" (UID: "80de45db-7b56-43ff-a1eb-e35f34e5de64"). InnerVolumeSpecName "kube-api-access-gs277". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:56:04 crc kubenswrapper[4736]: I0316 15:56:04.909282 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs277\" (UniqueName: \"kubernetes.io/projected/80de45db-7b56-43ff-a1eb-e35f34e5de64-kube-api-access-gs277\") on node \"crc\" DevicePath \"\"" Mar 16 15:56:05 crc kubenswrapper[4736]: I0316 15:56:05.341750 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561276-n47zt" event={"ID":"80de45db-7b56-43ff-a1eb-e35f34e5de64","Type":"ContainerDied","Data":"30f129b2dbc4d4941aa593b533483a38f51093e20d07dc7d3d19033857a97c44"} Mar 16 15:56:05 crc kubenswrapper[4736]: I0316 15:56:05.342348 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30f129b2dbc4d4941aa593b533483a38f51093e20d07dc7d3d19033857a97c44" Mar 16 15:56:05 crc kubenswrapper[4736]: I0316 15:56:05.341813 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561276-n47zt" Mar 16 15:56:05 crc kubenswrapper[4736]: I0316 15:56:05.779698 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561270-8hwgt"] Mar 16 15:56:05 crc kubenswrapper[4736]: I0316 15:56:05.788381 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561270-8hwgt"] Mar 16 15:56:06 crc kubenswrapper[4736]: I0316 15:56:06.991635 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8409a279-a5c8-4f8f-b208-e741e0ecb7d9" path="/var/lib/kubelet/pods/8409a279-a5c8-4f8f-b208-e741e0ecb7d9/volumes" Mar 16 15:56:15 crc kubenswrapper[4736]: I0316 15:56:15.978992 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:56:15 crc kubenswrapper[4736]: E0316 15:56:15.980285 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:56:26 crc kubenswrapper[4736]: I0316 15:56:26.978076 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:56:26 crc kubenswrapper[4736]: E0316 15:56:26.978828 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 15:56:41 crc kubenswrapper[4736]: I0316 15:56:41.980180 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 15:56:42 crc kubenswrapper[4736]: I0316 15:56:42.705903 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957"} Mar 16 15:56:50 crc kubenswrapper[4736]: I0316 15:56:50.208085 4736 scope.go:117] "RemoveContainer" containerID="5c44dcf3243512af820d90e18a8790a66e606735ba314734e1ecfd97c8647636" Mar 16 15:57:04 crc kubenswrapper[4736]: I0316 15:57:04.968591 4736 generic.go:334] "Generic (PLEG): container finished" podID="97bb28be-aaed-4b82-9df1-cb24c9dd48e3" containerID="28b1b8b9d0ef78a91d5f6ca5f5e5904739b47189d88f3b55375029b324fa63b7" exitCode=0 Mar 16 15:57:04 crc kubenswrapper[4736]: I0316 15:57:04.968655 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" event={"ID":"97bb28be-aaed-4b82-9df1-cb24c9dd48e3","Type":"ContainerDied","Data":"28b1b8b9d0ef78a91d5f6ca5f5e5904739b47189d88f3b55375029b324fa63b7"} Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.425844 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.548006 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0\") pod \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.548056 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zjlt\" (UniqueName: \"kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt\") pod \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.548131 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory\") pod \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.548155 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam\") pod \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.548220 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle\") pod \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\" (UID: \"97bb28be-aaed-4b82-9df1-cb24c9dd48e3\") " Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.557250 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "97bb28be-aaed-4b82-9df1-cb24c9dd48e3" (UID: "97bb28be-aaed-4b82-9df1-cb24c9dd48e3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.557354 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt" (OuterVolumeSpecName: "kube-api-access-6zjlt") pod "97bb28be-aaed-4b82-9df1-cb24c9dd48e3" (UID: "97bb28be-aaed-4b82-9df1-cb24c9dd48e3"). InnerVolumeSpecName "kube-api-access-6zjlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.581970 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory" (OuterVolumeSpecName: "inventory") pod "97bb28be-aaed-4b82-9df1-cb24c9dd48e3" (UID: "97bb28be-aaed-4b82-9df1-cb24c9dd48e3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.585486 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "97bb28be-aaed-4b82-9df1-cb24c9dd48e3" (UID: "97bb28be-aaed-4b82-9df1-cb24c9dd48e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.585558 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "97bb28be-aaed-4b82-9df1-cb24c9dd48e3" (UID: "97bb28be-aaed-4b82-9df1-cb24c9dd48e3"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.650355 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.650416 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zjlt\" (UniqueName: \"kubernetes.io/projected/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-kube-api-access-6zjlt\") on node \"crc\" DevicePath \"\"" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.650439 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.650457 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.650475 4736 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb28be-aaed-4b82-9df1-cb24c9dd48e3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.998272 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.998621 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h" event={"ID":"97bb28be-aaed-4b82-9df1-cb24c9dd48e3","Type":"ContainerDied","Data":"f392829fbe481c12ee13e19e5bf15b42ecfa1df6169a3448f914132f46ab0ac2"} Mar 16 15:57:06 crc kubenswrapper[4736]: I0316 15:57:06.998658 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f392829fbe481c12ee13e19e5bf15b42ecfa1df6169a3448f914132f46ab0ac2" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148236 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7"] Mar 16 15:57:07 crc kubenswrapper[4736]: E0316 15:57:07.148627 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148646 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" Mar 16 15:57:07 crc kubenswrapper[4736]: E0316 15:57:07.148671 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80de45db-7b56-43ff-a1eb-e35f34e5de64" containerName="oc" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148679 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="80de45db-7b56-43ff-a1eb-e35f34e5de64" containerName="oc" Mar 16 15:57:07 crc kubenswrapper[4736]: E0316 15:57:07.148693 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="extract-utilities" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148700 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="extract-utilities" Mar 16 15:57:07 crc kubenswrapper[4736]: E0316 15:57:07.148729 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb28be-aaed-4b82-9df1-cb24c9dd48e3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148738 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb28be-aaed-4b82-9df1-cb24c9dd48e3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 16 15:57:07 crc kubenswrapper[4736]: E0316 15:57:07.148755 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="extract-content" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148777 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="extract-content" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.148972 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb28be-aaed-4b82-9df1-cb24c9dd48e3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.149018 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4908205-8195-4ba9-8c91-7ecbc1e530e2" containerName="registry-server" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.149040 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="80de45db-7b56-43ff-a1eb-e35f34e5de64" containerName="oc" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.149741 4736 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.152321 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.152455 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.152750 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.152956 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.163150 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.163278 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.164461 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.164483 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7"] Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.260914 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261250 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261380 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261508 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261774 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.261926 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.262030 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.262152 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.262308 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wqtt\" (UniqueName: \"kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.262429 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.363715 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wqtt\" (UniqueName: \"kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.363960 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364153 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364405 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364524 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364641 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364750 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364894 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.364985 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.365076 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.369280 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.369353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.370250 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.370430 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.370572 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.370771 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.371378 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.371757 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.373729 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.378169 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.380455 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wqtt\" (UniqueName: \"kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bbqh7\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.471885 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:57:07 crc kubenswrapper[4736]: I0316 15:57:07.993163 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7"] Mar 16 15:57:09 crc kubenswrapper[4736]: I0316 15:57:09.020611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" event={"ID":"4020b89b-a736-4914-9ea6-969e75a9b526","Type":"ContainerStarted","Data":"b06821119d5e2c51804e44b1bb935d6d400b8043d6bb51c5e936f92ee55377e5"} Mar 16 15:57:09 crc kubenswrapper[4736]: I0316 15:57:09.021244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" event={"ID":"4020b89b-a736-4914-9ea6-969e75a9b526","Type":"ContainerStarted","Data":"6e211c9939f329ead5c713f6de6eb36922d1a690679fa748b000a922d802395c"} Mar 16 15:57:09 crc kubenswrapper[4736]: I0316 15:57:09.039623 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" podStartSLOduration=1.549573305 podStartE2EDuration="2.039601849s" podCreationTimestamp="2026-03-16 15:57:07 +0000 UTC" firstStartedPulling="2026-03-16 15:57:08.006814174 +0000 UTC m=+2629.734204461" lastFinishedPulling="2026-03-16 15:57:08.496842718 +0000 UTC m=+2630.224233005" observedRunningTime="2026-03-16 15:57:09.036774542 +0000 UTC m=+2630.764164829" watchObservedRunningTime="2026-03-16 15:57:09.039601849 +0000 UTC m=+2630.766992136" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.152749 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561278-kk4fq"] Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.155483 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.162642 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.162851 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.164802 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.165326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561278-kk4fq"] Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.310820 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4gk\" (UniqueName: \"kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk\") pod \"auto-csr-approver-29561278-kk4fq\" (UID: \"30f78a5b-48f7-4120-a8d4-5537e8378c0d\") " pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.413434 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4gk\" (UniqueName: \"kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk\") pod \"auto-csr-approver-29561278-kk4fq\" (UID: \"30f78a5b-48f7-4120-a8d4-5537e8378c0d\") " pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.433857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4gk\" (UniqueName: \"kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk\") pod \"auto-csr-approver-29561278-kk4fq\" (UID: \"30f78a5b-48f7-4120-a8d4-5537e8378c0d\") " pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.483248 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:00 crc kubenswrapper[4736]: I0316 15:58:00.993181 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561278-kk4fq"] Mar 16 15:58:01 crc kubenswrapper[4736]: I0316 15:58:01.543397 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" event={"ID":"30f78a5b-48f7-4120-a8d4-5537e8378c0d","Type":"ContainerStarted","Data":"a1958363e970f63adbe5781950de3b64089b938339f9b981d5c7f4eac020f4a5"} Mar 16 15:58:02 crc kubenswrapper[4736]: I0316 15:58:02.556878 4736 generic.go:334] "Generic (PLEG): container finished" podID="30f78a5b-48f7-4120-a8d4-5537e8378c0d" containerID="e526daeae278e39b64be1085317bf676310ee4121b8127c32c4ae808e5b93f2b" exitCode=0 Mar 16 15:58:02 crc kubenswrapper[4736]: I0316 15:58:02.557230 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" event={"ID":"30f78a5b-48f7-4120-a8d4-5537e8378c0d","Type":"ContainerDied","Data":"e526daeae278e39b64be1085317bf676310ee4121b8127c32c4ae808e5b93f2b"} Mar 16 15:58:03 crc kubenswrapper[4736]: I0316 15:58:03.984351 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.108075 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf4gk\" (UniqueName: \"kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk\") pod \"30f78a5b-48f7-4120-a8d4-5537e8378c0d\" (UID: \"30f78a5b-48f7-4120-a8d4-5537e8378c0d\") " Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.120711 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk" (OuterVolumeSpecName: "kube-api-access-nf4gk") pod "30f78a5b-48f7-4120-a8d4-5537e8378c0d" (UID: "30f78a5b-48f7-4120-a8d4-5537e8378c0d"). InnerVolumeSpecName "kube-api-access-nf4gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.210753 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf4gk\" (UniqueName: \"kubernetes.io/projected/30f78a5b-48f7-4120-a8d4-5537e8378c0d-kube-api-access-nf4gk\") on node \"crc\" DevicePath \"\"" Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.584545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" event={"ID":"30f78a5b-48f7-4120-a8d4-5537e8378c0d","Type":"ContainerDied","Data":"a1958363e970f63adbe5781950de3b64089b938339f9b981d5c7f4eac020f4a5"} Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.584889 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1958363e970f63adbe5781950de3b64089b938339f9b981d5c7f4eac020f4a5" Mar 16 15:58:04 crc kubenswrapper[4736]: I0316 15:58:04.584578 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561278-kk4fq" Mar 16 15:58:05 crc kubenswrapper[4736]: I0316 15:58:05.079514 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561272-nqfr6"] Mar 16 15:58:05 crc kubenswrapper[4736]: I0316 15:58:05.091366 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561272-nqfr6"] Mar 16 15:58:06 crc kubenswrapper[4736]: I0316 15:58:06.991449 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042b14dc-bb3b-4764-a023-08f2e98279da" path="/var/lib/kubelet/pods/042b14dc-bb3b-4764-a023-08f2e98279da/volumes" Mar 16 15:58:50 crc kubenswrapper[4736]: I0316 15:58:50.335048 4736 scope.go:117] "RemoveContainer" containerID="01878ec3a3313765440fd72e050ba635abc31e0e8e9faacf735e28f3ca0a1099" Mar 16 15:59:08 crc kubenswrapper[4736]: I0316 15:59:08.508941 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:59:08 crc kubenswrapper[4736]: I0316 15:59:08.509650 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:59:36 crc kubenswrapper[4736]: I0316 15:59:36.448812 4736 generic.go:334] "Generic (PLEG): container finished" podID="4020b89b-a736-4914-9ea6-969e75a9b526" containerID="b06821119d5e2c51804e44b1bb935d6d400b8043d6bb51c5e936f92ee55377e5" exitCode=0 Mar 16 15:59:36 crc kubenswrapper[4736]: I0316 15:59:36.448889 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" event={"ID":"4020b89b-a736-4914-9ea6-969e75a9b526","Type":"ContainerDied","Data":"b06821119d5e2c51804e44b1bb935d6d400b8043d6bb51c5e936f92ee55377e5"} Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.879901 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.956986 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957087 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957162 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957233 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957288 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957319 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wqtt\" (UniqueName: \"kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957367 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957408 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957426 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957446 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.957469 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2\") pod \"4020b89b-a736-4914-9ea6-969e75a9b526\" (UID: \"4020b89b-a736-4914-9ea6-969e75a9b526\") " Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.965014 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.970781 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt" (OuterVolumeSpecName: "kube-api-access-5wqtt") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "kube-api-access-5wqtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.988065 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.992315 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.994694 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory" (OuterVolumeSpecName: "inventory") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:37 crc kubenswrapper[4736]: I0316 15:59:37.996062 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.006094 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.011874 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.012727 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.016697 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.018580 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "4020b89b-a736-4914-9ea6-969e75a9b526" (UID: "4020b89b-a736-4914-9ea6-969e75a9b526"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059544 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059575 4736 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059586 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wqtt\" (UniqueName: \"kubernetes.io/projected/4020b89b-a736-4914-9ea6-969e75a9b526-kube-api-access-5wqtt\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059596 4736 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059606 4736 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4020b89b-a736-4914-9ea6-969e75a9b526-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059616 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059625 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059634 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059641 4736 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059651 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.059659 4736 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4020b89b-a736-4914-9ea6-969e75a9b526-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.470944 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" event={"ID":"4020b89b-a736-4914-9ea6-969e75a9b526","Type":"ContainerDied","Data":"6e211c9939f329ead5c713f6de6eb36922d1a690679fa748b000a922d802395c"} Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 
15:59:38.470985 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bbqh7" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.471001 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e211c9939f329ead5c713f6de6eb36922d1a690679fa748b000a922d802395c" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.509177 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.509231 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.647813 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl"] Mar 16 15:59:38 crc kubenswrapper[4736]: E0316 15:59:38.648313 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4020b89b-a736-4914-9ea6-969e75a9b526" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.648341 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4020b89b-a736-4914-9ea6-969e75a9b526" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 16 15:59:38 crc kubenswrapper[4736]: E0316 15:59:38.648373 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f78a5b-48f7-4120-a8d4-5537e8378c0d" containerName="oc" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.648383 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f78a5b-48f7-4120-a8d4-5537e8378c0d" containerName="oc" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.648624 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4020b89b-a736-4914-9ea6-969e75a9b526" containerName="nova-edpm-deployment-openstack-edpm-ipam" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.648661 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f78a5b-48f7-4120-a8d4-5537e8378c0d" containerName="oc" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.649485 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.656090 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.656091 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-nrm5l" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.656456 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.656589 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.657140 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.665836 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl"] Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.772520 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.772953 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.772988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.773089 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.773134 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.773168 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfhz\" (UniqueName: \"kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.773198 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.875652 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.875761 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfhz\" (UniqueName: \"kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.875907 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.875988 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.876176 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.876242 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.876289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.880432 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.881786 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.882252 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.882995 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.884466 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.885155 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.899955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfhz\" (UniqueName: 
\"kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:38 crc kubenswrapper[4736]: I0316 15:59:38.966714 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 15:59:39 crc kubenswrapper[4736]: I0316 15:59:39.498508 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl"] Mar 16 15:59:40 crc kubenswrapper[4736]: I0316 15:59:40.490452 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" event={"ID":"bf51b7ea-25d5-4fa2-9abe-db781c31f96f","Type":"ContainerStarted","Data":"8996351b88c6881c6dcac9a315a08344d8069b9eb6c8ef5ed7b4fc3faac76108"} Mar 16 15:59:41 crc kubenswrapper[4736]: I0316 15:59:41.504550 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" event={"ID":"bf51b7ea-25d5-4fa2-9abe-db781c31f96f","Type":"ContainerStarted","Data":"5e2d317568390de919777c35957f08ccb5a52264be995408f561145e7dd78912"} Mar 16 15:59:41 crc kubenswrapper[4736]: I0316 15:59:41.538796 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" podStartSLOduration=2.793877872 podStartE2EDuration="3.538772652s" podCreationTimestamp="2026-03-16 15:59:38 +0000 UTC" firstStartedPulling="2026-03-16 15:59:39.504291929 +0000 UTC m=+2781.231682236" lastFinishedPulling="2026-03-16 15:59:40.249186729 +0000 UTC m=+2781.976577016" observedRunningTime="2026-03-16 15:59:41.530764974 +0000 UTC m=+2783.258155271" watchObservedRunningTime="2026-03-16 15:59:41.538772652 +0000 UTC m=+2783.266162949" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.145942 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561280-j4bz6"] Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.148079 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.150752 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.150822 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.150908 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.157622 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561280-j4bz6"] Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.206060 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qw7\" (UniqueName: \"kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7\") pod \"auto-csr-approver-29561280-j4bz6\" (UID: \"41929956-3d26-4a51-9444-5011e999a62d\") " pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.249713 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9"] Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.251243 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.253850 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.254060 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.258055 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9"] Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.308656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qw7\" (UniqueName: \"kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7\") pod \"auto-csr-approver-29561280-j4bz6\" (UID: \"41929956-3d26-4a51-9444-5011e999a62d\") " pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.308706 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.308723 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: 
I0316 16:00:00.308774 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6phj\" (UniqueName: \"kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.325947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qw7\" (UniqueName: \"kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7\") pod \"auto-csr-approver-29561280-j4bz6\" (UID: \"41929956-3d26-4a51-9444-5011e999a62d\") " pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.410386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.410427 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.410494 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6phj\" (UniqueName: \"kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.411556 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.413793 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.427074 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6phj\" (UniqueName: \"kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj\") pod \"collect-profiles-29561280-6nbs9\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.473426 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.568387 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:00 crc kubenswrapper[4736]: W0316 16:00:00.942838 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41929956_3d26_4a51_9444_5011e999a62d.slice/crio-1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80 WatchSource:0}: Error finding container 1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80: Status 404 returned error can't find the container with id 1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80 Mar 16 16:00:00 crc kubenswrapper[4736]: I0316 16:00:00.943166 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561280-j4bz6"] Mar 16 16:00:01 crc kubenswrapper[4736]: W0316 16:00:01.074023 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod949c9ebb_dba5_4681_9252_c327b26a00e6.slice/crio-2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff WatchSource:0}: Error finding container 2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff: Status 404 returned error can't find the container with id 2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff Mar 16 16:00:01 crc kubenswrapper[4736]: I0316 16:00:01.075868 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9"] Mar 16 16:00:01 crc kubenswrapper[4736]: I0316 16:00:01.691652 4736 generic.go:334] "Generic (PLEG): container finished" podID="949c9ebb-dba5-4681-9252-c327b26a00e6" containerID="73fd8b8407b09a866bb4f1ce7c7c3566d00c6a64b3ff46e8f3947878e5097cd8" exitCode=0 Mar 16 16:00:01 crc kubenswrapper[4736]: I0316 16:00:01.691752 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" event={"ID":"949c9ebb-dba5-4681-9252-c327b26a00e6","Type":"ContainerDied","Data":"73fd8b8407b09a866bb4f1ce7c7c3566d00c6a64b3ff46e8f3947878e5097cd8"} Mar 16 16:00:01 crc kubenswrapper[4736]: I0316 16:00:01.692132 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" event={"ID":"949c9ebb-dba5-4681-9252-c327b26a00e6","Type":"ContainerStarted","Data":"2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff"} Mar 16 16:00:01 crc kubenswrapper[4736]: I0316 16:00:01.693246 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" event={"ID":"41929956-3d26-4a51-9444-5011e999a62d","Type":"ContainerStarted","Data":"1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80"} Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.057948 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.256912 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume\") pod \"949c9ebb-dba5-4681-9252-c327b26a00e6\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.256981 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume\") pod \"949c9ebb-dba5-4681-9252-c327b26a00e6\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.257120 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6phj\" (UniqueName: \"kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj\") pod \"949c9ebb-dba5-4681-9252-c327b26a00e6\" (UID: \"949c9ebb-dba5-4681-9252-c327b26a00e6\") " Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.257635 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "949c9ebb-dba5-4681-9252-c327b26a00e6" (UID: "949c9ebb-dba5-4681-9252-c327b26a00e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.261896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "949c9ebb-dba5-4681-9252-c327b26a00e6" (UID: "949c9ebb-dba5-4681-9252-c327b26a00e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.262903 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj" (OuterVolumeSpecName: "kube-api-access-p6phj") pod "949c9ebb-dba5-4681-9252-c327b26a00e6" (UID: "949c9ebb-dba5-4681-9252-c327b26a00e6"). InnerVolumeSpecName "kube-api-access-p6phj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.365021 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6phj\" (UniqueName: \"kubernetes.io/projected/949c9ebb-dba5-4681-9252-c327b26a00e6-kube-api-access-p6phj\") on node \"crc\" DevicePath \"\"" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.365058 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/949c9ebb-dba5-4681-9252-c327b26a00e6-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.365069 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/949c9ebb-dba5-4681-9252-c327b26a00e6-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.709742 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" event={"ID":"949c9ebb-dba5-4681-9252-c327b26a00e6","Type":"ContainerDied","Data":"2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff"} Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.710150 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa3976313bf4aa428dcf64de4309a11b151f470116d1a5c866f3b61124b60ff" Mar 16 16:00:03 crc kubenswrapper[4736]: I0316 16:00:03.709784 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9" Mar 16 16:00:04 crc kubenswrapper[4736]: I0316 16:00:04.166922 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89"] Mar 16 16:00:04 crc kubenswrapper[4736]: I0316 16:00:04.179200 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561235-7vr89"] Mar 16 16:00:04 crc kubenswrapper[4736]: I0316 16:00:04.722353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" event={"ID":"41929956-3d26-4a51-9444-5011e999a62d","Type":"ContainerStarted","Data":"1a60fd41b672fdf85125d6607ce0feae8fdb9ac4f2488a72e65fc4a69fb5f52b"} Mar 16 16:00:04 crc kubenswrapper[4736]: I0316 16:00:04.746937 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" podStartSLOduration=1.356429014 podStartE2EDuration="4.746913617s" podCreationTimestamp="2026-03-16 16:00:00 +0000 UTC" firstStartedPulling="2026-03-16 16:00:00.946918961 +0000 UTC m=+2802.674309248" lastFinishedPulling="2026-03-16 16:00:04.337403564 +0000 UTC m=+2806.064793851" observedRunningTime="2026-03-16 16:00:04.739137217 +0000 UTC m=+2806.466527514" watchObservedRunningTime="2026-03-16 16:00:04.746913617 +0000 UTC m=+2806.474303914" Mar 16 16:00:04 crc kubenswrapper[4736]: I0316 16:00:04.988802 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95b535a-2e38-4797-97c6-5ab54160b983" path="/var/lib/kubelet/pods/e95b535a-2e38-4797-97c6-5ab54160b983/volumes" Mar 16 16:00:05 crc kubenswrapper[4736]: I0316 16:00:05.734795 4736 generic.go:334] "Generic (PLEG): container finished" podID="41929956-3d26-4a51-9444-5011e999a62d" containerID="1a60fd41b672fdf85125d6607ce0feae8fdb9ac4f2488a72e65fc4a69fb5f52b" exitCode=0 Mar 16 16:00:05 crc kubenswrapper[4736]: I0316 
16:00:05.734845 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" event={"ID":"41929956-3d26-4a51-9444-5011e999a62d","Type":"ContainerDied","Data":"1a60fd41b672fdf85125d6607ce0feae8fdb9ac4f2488a72e65fc4a69fb5f52b"} Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.173273 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.344724 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9qw7\" (UniqueName: \"kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7\") pod \"41929956-3d26-4a51-9444-5011e999a62d\" (UID: \"41929956-3d26-4a51-9444-5011e999a62d\") " Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.352329 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7" (OuterVolumeSpecName: "kube-api-access-l9qw7") pod "41929956-3d26-4a51-9444-5011e999a62d" (UID: "41929956-3d26-4a51-9444-5011e999a62d"). InnerVolumeSpecName "kube-api-access-l9qw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.447537 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9qw7\" (UniqueName: \"kubernetes.io/projected/41929956-3d26-4a51-9444-5011e999a62d-kube-api-access-l9qw7\") on node \"crc\" DevicePath \"\"" Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.755322 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" event={"ID":"41929956-3d26-4a51-9444-5011e999a62d","Type":"ContainerDied","Data":"1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80"} Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.755623 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa92ebf49077de96c30beec11914a104b51ad8274ec87916b83edb32f999a80" Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.755389 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561280-j4bz6" Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.812886 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561274-tdh84"] Mar 16 16:00:07 crc kubenswrapper[4736]: I0316 16:00:07.822256 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561274-tdh84"] Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.508515 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.508565 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.508605 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.509339 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.509391 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957" gracePeriod=600 Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.764532 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957" exitCode=0 Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.764593 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957"} Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.764851 4736 scope.go:117] "RemoveContainer" containerID="0d6dcab2b223aa3471faf151d1a54e3e2d9584e7cc1264435fe77965e8aca985" Mar 16 16:00:08 crc kubenswrapper[4736]: I0316 16:00:08.987538 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a0f0fa-94fe-450e-9800-565c0f5b66e1" path="/var/lib/kubelet/pods/35a0f0fa-94fe-450e-9800-565c0f5b66e1/volumes" Mar 16 16:00:09 crc kubenswrapper[4736]: I0316 16:00:09.780254 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8"} Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.221540 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:00:42 crc kubenswrapper[4736]: E0316 16:00:42.222469 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41929956-3d26-4a51-9444-5011e999a62d" containerName="oc" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.222484 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="41929956-3d26-4a51-9444-5011e999a62d" containerName="oc" Mar 16 16:00:42 crc kubenswrapper[4736]: E0316 16:00:42.222511 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949c9ebb-dba5-4681-9252-c327b26a00e6" containerName="collect-profiles" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.222519 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="949c9ebb-dba5-4681-9252-c327b26a00e6" containerName="collect-profiles" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.222740 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="949c9ebb-dba5-4681-9252-c327b26a00e6" containerName="collect-profiles" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.222770 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="41929956-3d26-4a51-9444-5011e999a62d" containerName="oc" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.225310 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.241674 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.264324 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.265099 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.265217 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwcl\" (UniqueName: \"kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.366705 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 
16:00:42.366780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwcl\" (UniqueName: \"kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.366840 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.367289 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.367412 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.387922 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwcl\" (UniqueName: \"kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl\") pod \"community-operators-ftntd\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.412956 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.421573 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.447072 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.566496 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.577560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbm6r\" (UniqueName: \"kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.577874 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.577980 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.679789 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbm6r\" (UniqueName: \"kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.679867 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.679896 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.680406 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.680411 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities\") pod \"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.710002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbm6r\" (UniqueName: \"kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r\") pod 
\"redhat-marketplace-w22pd\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:42 crc kubenswrapper[4736]: I0316 16:00:42.773624 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:43 crc kubenswrapper[4736]: I0316 16:00:43.262656 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:00:43 crc kubenswrapper[4736]: I0316 16:00:43.589673 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:00:43 crc kubenswrapper[4736]: W0316 16:00:43.595766 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b4b7df8_84c0_45a8_b2de_dd28ca33da25.slice/crio-5d349061f8c413cf226ad465dc454fd1387c80f69f83ec7f28ffa89379c585eb WatchSource:0}: Error finding container 5d349061f8c413cf226ad465dc454fd1387c80f69f83ec7f28ffa89379c585eb: Status 404 returned error can't find the container with id 5d349061f8c413cf226ad465dc454fd1387c80f69f83ec7f28ffa89379c585eb Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.133087 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerID="12b531119107f122a5c5fbfcfaa282ddc74bd3ec0f1f13d998b44311767fcec6" exitCode=0 Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.133220 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerDied","Data":"12b531119107f122a5c5fbfcfaa282ddc74bd3ec0f1f13d998b44311767fcec6"} Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.133263 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerStarted","Data":"5d349061f8c413cf226ad465dc454fd1387c80f69f83ec7f28ffa89379c585eb"} Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.135124 4736 generic.go:334] "Generic (PLEG): container finished" podID="44843712-11b8-4d11-b61f-00678e344b30" containerID="299c7fdac0a2219f5165184ee36697bb1011f1681bb211d7b8334099c2db26e2" exitCode=0 Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.135166 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerDied","Data":"299c7fdac0a2219f5165184ee36697bb1011f1681bb211d7b8334099c2db26e2"} Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.135193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerStarted","Data":"910e1f03278f9061a63140aa05a1f593cfa59e6850a983b60c95bbcb291d0c06"} Mar 16 16:00:44 crc kubenswrapper[4736]: I0316 16:00:44.135933 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:00:45 crc kubenswrapper[4736]: I0316 16:00:45.150237 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerStarted","Data":"4e9c0e15aecc5625c4ffd94a51fe29095b87a10130c82b3e3ae12020f7f55335"} Mar 16 16:00:47 crc kubenswrapper[4736]: I0316 
16:00:47.173427 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerID="4e9c0e15aecc5625c4ffd94a51fe29095b87a10130c82b3e3ae12020f7f55335" exitCode=0 Mar 16 16:00:47 crc kubenswrapper[4736]: I0316 16:00:47.173638 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerDied","Data":"4e9c0e15aecc5625c4ffd94a51fe29095b87a10130c82b3e3ae12020f7f55335"} Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.206482 4736 generic.go:334] "Generic (PLEG): container finished" podID="44843712-11b8-4d11-b61f-00678e344b30" containerID="5b85387327c831f5148d91bf2cc9c35a0d71c330c802dadf0c094e25d194fca5" exitCode=0 Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.206537 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerDied","Data":"5b85387327c831f5148d91bf2cc9c35a0d71c330c802dadf0c094e25d194fca5"} Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.213255 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerStarted","Data":"b0abe9ca77dd2d955a54951b7d0455b010818bec0089db11c004aa0538eae9c6"} Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.257470 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w22pd" podStartSLOduration=2.577755567 podStartE2EDuration="8.257452158s" podCreationTimestamp="2026-03-16 16:00:42 +0000 UTC" firstStartedPulling="2026-03-16 16:00:44.135649668 +0000 UTC m=+2845.863039955" lastFinishedPulling="2026-03-16 16:00:49.815346259 +0000 UTC m=+2851.542736546" observedRunningTime="2026-03-16 16:00:50.255307781 +0000 UTC m=+2851.982698078" watchObservedRunningTime="2026-03-16 16:00:50.257452158 +0000 UTC m=+2851.984842445" Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.460574 4736 scope.go:117] "RemoveContainer" containerID="988ab0a1449a75719b2eaa713d728f43e859fed2b271c60d45a8ff342d1ac67d" Mar 16 16:00:50 crc kubenswrapper[4736]: I0316 16:00:50.510858 4736 scope.go:117] "RemoveContainer" containerID="19ccce3bd13bcee9c4bea214ebe6034c8d2139fce8eb1204d3d9993e69b77a2b" Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.230947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerStarted","Data":"0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd"} Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.259179 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftntd" podStartSLOduration=3.1282189049999998 podStartE2EDuration="10.259162551s" podCreationTimestamp="2026-03-16 16:00:42 +0000 UTC" firstStartedPulling="2026-03-16 16:00:44.136929282 +0000 UTC m=+2845.864319569" lastFinishedPulling="2026-03-16 16:00:51.267872938 +0000 UTC m=+2852.995263215" observedRunningTime="2026-03-16 16:00:52.251970547 +0000 UTC m=+2853.979360834" watchObservedRunningTime="2026-03-16 16:00:52.259162551 +0000 UTC m=+2853.986552838" Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.567164 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.567223 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.773940 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:52 crc kubenswrapper[4736]: I0316 16:00:52.774006 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:00:53 crc kubenswrapper[4736]: I0316 16:00:53.611930 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ftntd" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" probeResult="failure" output=< Mar 16 16:00:53 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:00:53 crc kubenswrapper[4736]: > Mar 16 16:00:53 crc kubenswrapper[4736]: I0316 16:00:53.823865 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-w22pd" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="registry-server" probeResult="failure" output=< Mar 16 16:00:53 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:00:53 crc kubenswrapper[4736]: > Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.148983 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29561281-4v2pq"] Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.151176 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.179428 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561281-4v2pq"] Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.260937 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.261008 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.261301 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.261481 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9wns\" (UniqueName: \"kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns\") pod \"keystone-cron-29561281-4v2pq\" (UID: 
\"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.363582 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.363682 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9wns\" (UniqueName: \"kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.363780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.363819 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.383351 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.384498 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.399270 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.413856 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9wns\" (UniqueName: \"kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns\") pod \"keystone-cron-29561281-4v2pq\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:00 crc kubenswrapper[4736]: I0316 16:01:00.471233 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:01 crc kubenswrapper[4736]: I0316 16:01:01.009670 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561281-4v2pq"] Mar 16 16:01:01 crc kubenswrapper[4736]: I0316 16:01:01.374096 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561281-4v2pq" event={"ID":"1be53923-c22e-42a2-936a-5dd4a6484821","Type":"ContainerStarted","Data":"9900d8b7094e99244587d5e1555573a423f29359118694ff3ca7ea3f9664955d"} Mar 16 16:01:01 crc kubenswrapper[4736]: I0316 16:01:01.374525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561281-4v2pq" event={"ID":"1be53923-c22e-42a2-936a-5dd4a6484821","Type":"ContainerStarted","Data":"e292901af5d13933568423298290233afd81536b89ed048cb1cff6cc9d318c01"} Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.622867 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.649270 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29561281-4v2pq" podStartSLOduration=2.649248412 podStartE2EDuration="2.649248412s" podCreationTimestamp="2026-03-16 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 16:01:01.403205716 +0000 UTC m=+2863.130596003" watchObservedRunningTime="2026-03-16 16:01:02.649248412 +0000 UTC m=+2864.376638699" Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.670902 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.770694 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.832806 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.876763 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.877061 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m9cjw" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="registry-server" containerID="cri-o://35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c" gracePeriod=2 Mar 16 16:01:02 crc kubenswrapper[4736]: I0316 16:01:02.882920 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.323283 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.394788 4736 generic.go:334] "Generic (PLEG): container finished" podID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerID="35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c" exitCode=0 Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.395738 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m9cjw" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.396044 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerDied","Data":"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c"} Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.396092 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m9cjw" event={"ID":"47430cd6-bdce-4f4c-b736-6dad559aed15","Type":"ContainerDied","Data":"541d091b5e65e565a38943dd1a10848c30fc34929d1157fb13c8f09b037b9b80"} Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.396159 4736 scope.go:117] "RemoveContainer" containerID="35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.427945 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdm97\" (UniqueName: \"kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97\") pod \"47430cd6-bdce-4f4c-b736-6dad559aed15\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.428262 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content\") pod \"47430cd6-bdce-4f4c-b736-6dad559aed15\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.428347 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities\") pod \"47430cd6-bdce-4f4c-b736-6dad559aed15\" (UID: \"47430cd6-bdce-4f4c-b736-6dad559aed15\") " Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.429937 4736 scope.go:117] "RemoveContainer" containerID="79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.430309 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities" (OuterVolumeSpecName: "utilities") pod "47430cd6-bdce-4f4c-b736-6dad559aed15" (UID: "47430cd6-bdce-4f4c-b736-6dad559aed15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.438381 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97" (OuterVolumeSpecName: "kube-api-access-qdm97") pod "47430cd6-bdce-4f4c-b736-6dad559aed15" (UID: "47430cd6-bdce-4f4c-b736-6dad559aed15"). InnerVolumeSpecName "kube-api-access-qdm97". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.517291 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47430cd6-bdce-4f4c-b736-6dad559aed15" (UID: "47430cd6-bdce-4f4c-b736-6dad559aed15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.532375 4736 scope.go:117] "RemoveContainer" containerID="57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.534051 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdm97\" (UniqueName: \"kubernetes.io/projected/47430cd6-bdce-4f4c-b736-6dad559aed15-kube-api-access-qdm97\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.534072 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.534097 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47430cd6-bdce-4f4c-b736-6dad559aed15-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.570151 4736 scope.go:117] "RemoveContainer" containerID="35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c" Mar 16 16:01:03 crc kubenswrapper[4736]: E0316 16:01:03.574049 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c\": container with ID starting with 35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c not found: ID does not exist" containerID="35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.574082 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c"} err="failed to get container status \"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c\": rpc error: code = NotFound desc = could not find container \"35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c\": container with ID starting with 35c716cda902b938a5da689de1a2a9b4d282a507edaf9f170de20dff9481ec3c not found: ID does not exist" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.574103 4736 scope.go:117] "RemoveContainer" containerID="79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1" Mar 16 16:01:03 crc kubenswrapper[4736]: E0316 16:01:03.574348 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1\": container with ID starting with 79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1 not found: ID does not exist" containerID="79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.574375 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1"} err="failed to get container status \"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1\": rpc error: code = NotFound desc = could not find container \"79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1\": container with ID starting with 79f09f4b1084ea153226d68e6d153e2d37a78bc305bf557880de922c5eb0d5c1 not found: ID does not exist" Mar 16 16:01:03 crc 
kubenswrapper[4736]: I0316 16:01:03.574391 4736 scope.go:117] "RemoveContainer" containerID="57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3" Mar 16 16:01:03 crc kubenswrapper[4736]: E0316 16:01:03.574615 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3\": container with ID starting with 57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3 not found: ID does not exist" containerID="57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.574641 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3"} err="failed to get container status \"57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3\": rpc error: code = NotFound desc = could not find container \"57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3\": container with ID starting with 57cb08d2237697596c9ac6aa976960fb7b9941ff088d584e353ff7cc7f84e4a3 not found: ID does not exist" Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.721205 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 16:01:03 crc kubenswrapper[4736]: I0316 16:01:03.746014 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m9cjw"] Mar 16 16:01:04 crc kubenswrapper[4736]: I0316 16:01:04.994676 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" path="/var/lib/kubelet/pods/47430cd6-bdce-4f4c-b736-6dad559aed15/volumes" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.261225 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.261456 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w22pd" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="registry-server" containerID="cri-o://b0abe9ca77dd2d955a54951b7d0455b010818bec0089db11c004aa0538eae9c6" gracePeriod=2 Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.418799 4736 generic.go:334] "Generic (PLEG): container finished" podID="1be53923-c22e-42a2-936a-5dd4a6484821" containerID="9900d8b7094e99244587d5e1555573a423f29359118694ff3ca7ea3f9664955d" exitCode=0 Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.418870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561281-4v2pq" event={"ID":"1be53923-c22e-42a2-936a-5dd4a6484821","Type":"ContainerDied","Data":"9900d8b7094e99244587d5e1555573a423f29359118694ff3ca7ea3f9664955d"} Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.424481 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerID="b0abe9ca77dd2d955a54951b7d0455b010818bec0089db11c004aa0538eae9c6" exitCode=0 Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.424629 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerDied","Data":"b0abe9ca77dd2d955a54951b7d0455b010818bec0089db11c004aa0538eae9c6"} Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 
16:01:05.696401 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.786059 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities\") pod \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.786428 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbm6r\" (UniqueName: \"kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r\") pod \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.786498 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content\") pod \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\" (UID: \"6b4b7df8-84c0-45a8-b2de-dd28ca33da25\") " Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.786621 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities" (OuterVolumeSpecName: "utilities") pod "6b4b7df8-84c0-45a8-b2de-dd28ca33da25" (UID: "6b4b7df8-84c0-45a8-b2de-dd28ca33da25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.787329 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.795711 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r" (OuterVolumeSpecName: "kube-api-access-sbm6r") pod "6b4b7df8-84c0-45a8-b2de-dd28ca33da25" (UID: "6b4b7df8-84c0-45a8-b2de-dd28ca33da25"). InnerVolumeSpecName "kube-api-access-sbm6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.825747 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b4b7df8-84c0-45a8-b2de-dd28ca33da25" (UID: "6b4b7df8-84c0-45a8-b2de-dd28ca33da25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.889373 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbm6r\" (UniqueName: \"kubernetes.io/projected/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-kube-api-access-sbm6r\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:05 crc kubenswrapper[4736]: I0316 16:01:05.889410 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b4b7df8-84c0-45a8-b2de-dd28ca33da25-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.437733 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w22pd" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.439701 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w22pd" event={"ID":"6b4b7df8-84c0-45a8-b2de-dd28ca33da25","Type":"ContainerDied","Data":"5d349061f8c413cf226ad465dc454fd1387c80f69f83ec7f28ffa89379c585eb"} Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.439774 4736 scope.go:117] "RemoveContainer" containerID="b0abe9ca77dd2d955a54951b7d0455b010818bec0089db11c004aa0538eae9c6" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.488625 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.495657 4736 scope.go:117] "RemoveContainer" containerID="4e9c0e15aecc5625c4ffd94a51fe29095b87a10130c82b3e3ae12020f7f55335" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.496327 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w22pd"] Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.539145 4736 scope.go:117] "RemoveContainer" containerID="12b531119107f122a5c5fbfcfaa282ddc74bd3ec0f1f13d998b44311767fcec6" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.839363 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.907804 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9wns\" (UniqueName: \"kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns\") pod \"1be53923-c22e-42a2-936a-5dd4a6484821\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.908267 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys\") pod \"1be53923-c22e-42a2-936a-5dd4a6484821\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.908370 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data\") pod \"1be53923-c22e-42a2-936a-5dd4a6484821\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.908513 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle\") pod \"1be53923-c22e-42a2-936a-5dd4a6484821\" (UID: \"1be53923-c22e-42a2-936a-5dd4a6484821\") " Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.913896 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns" (OuterVolumeSpecName: "kube-api-access-v9wns") pod "1be53923-c22e-42a2-936a-5dd4a6484821" (UID: "1be53923-c22e-42a2-936a-5dd4a6484821"). InnerVolumeSpecName "kube-api-access-v9wns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.918835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1be53923-c22e-42a2-936a-5dd4a6484821" (UID: "1be53923-c22e-42a2-936a-5dd4a6484821"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.954036 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1be53923-c22e-42a2-936a-5dd4a6484821" (UID: "1be53923-c22e-42a2-936a-5dd4a6484821"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.992642 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" path="/var/lib/kubelet/pods/6b4b7df8-84c0-45a8-b2de-dd28ca33da25/volumes" Mar 16 16:01:06 crc kubenswrapper[4736]: I0316 16:01:06.992816 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data" (OuterVolumeSpecName: "config-data") pod "1be53923-c22e-42a2-936a-5dd4a6484821" (UID: "1be53923-c22e-42a2-936a-5dd4a6484821"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.017780 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.017988 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9wns\" (UniqueName: \"kubernetes.io/projected/1be53923-c22e-42a2-936a-5dd4a6484821-kube-api-access-v9wns\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.018022 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.018040 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1be53923-c22e-42a2-936a-5dd4a6484821-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.449669 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561281-4v2pq" event={"ID":"1be53923-c22e-42a2-936a-5dd4a6484821","Type":"ContainerDied","Data":"e292901af5d13933568423298290233afd81536b89ed048cb1cff6cc9d318c01"} Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.449710 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e292901af5d13933568423298290233afd81536b89ed048cb1cff6cc9d318c01" Mar 16 16:01:07 crc kubenswrapper[4736]: I0316 16:01:07.449709 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561281-4v2pq" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.148735 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561282-kqh5k"] Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150493 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="extract-content" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150524 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="extract-content" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150559 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="extract-content" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150576 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="extract-content" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150617 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be53923-c22e-42a2-936a-5dd4a6484821" containerName="keystone-cron" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150636 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be53923-c22e-42a2-936a-5dd4a6484821" containerName="keystone-cron" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150662 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="extract-utilities" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150679 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="extract-utilities" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150776 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150797 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150824 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150839 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: E0316 16:02:00.150881 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="extract-utilities" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.150897 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="extract-utilities" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.151433 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="47430cd6-bdce-4f4c-b736-6dad559aed15" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.151467 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be53923-c22e-42a2-936a-5dd4a6484821" containerName="keystone-cron" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.151496 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6b4b7df8-84c0-45a8-b2de-dd28ca33da25" containerName="registry-server" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.152957 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.155464 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.155503 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.155627 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.159674 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561282-kqh5k"] Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.253205 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmv5b\" (UniqueName: \"kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b\") pod \"auto-csr-approver-29561282-kqh5k\" (UID: \"1faa3a39-edf5-45df-ac37-69de3c2acea5\") " pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.354678 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmv5b\" (UniqueName: \"kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b\") pod \"auto-csr-approver-29561282-kqh5k\" (UID: \"1faa3a39-edf5-45df-ac37-69de3c2acea5\") " pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.378153 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmv5b\" (UniqueName: \"kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b\") pod \"auto-csr-approver-29561282-kqh5k\" (UID: \"1faa3a39-edf5-45df-ac37-69de3c2acea5\") " pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.480321 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:00 crc kubenswrapper[4736]: I0316 16:02:00.946530 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561282-kqh5k"] Mar 16 16:02:01 crc kubenswrapper[4736]: I0316 16:02:01.968027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" event={"ID":"1faa3a39-edf5-45df-ac37-69de3c2acea5","Type":"ContainerStarted","Data":"23ec227ff696482d12bde6b5e24eeda4793288ed4efc6bd938fdb3874cbded3d"} Mar 16 16:02:03 crc kubenswrapper[4736]: I0316 16:02:03.009288 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" event={"ID":"1faa3a39-edf5-45df-ac37-69de3c2acea5","Type":"ContainerStarted","Data":"78b8b5a99f500c86bbe904d122650503a1f3d1a7bae26be3e31b5b4847f2153d"} Mar 16 16:02:03 crc kubenswrapper[4736]: I0316 16:02:03.022565 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" podStartSLOduration=1.85208211 podStartE2EDuration="3.02254551s" podCreationTimestamp="2026-03-16 16:02:00 +0000 UTC" firstStartedPulling="2026-03-16 16:02:00.962845856 +0000 UTC m=+2922.690236143" lastFinishedPulling="2026-03-16 16:02:02.133309256 +0000 UTC m=+2923.860699543" observedRunningTime="2026-03-16 16:02:03.010701801 +0000 UTC m=+2924.738092098" watchObservedRunningTime="2026-03-16 16:02:03.02254551 +0000 UTC m=+2924.749935797" Mar 16 16:02:04 crc kubenswrapper[4736]: I0316 16:02:04.005460 4736 generic.go:334] "Generic (PLEG): container finished" podID="1faa3a39-edf5-45df-ac37-69de3c2acea5" containerID="78b8b5a99f500c86bbe904d122650503a1f3d1a7bae26be3e31b5b4847f2153d" exitCode=0 Mar 16 16:02:04 crc kubenswrapper[4736]: I0316 16:02:04.005576 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" event={"ID":"1faa3a39-edf5-45df-ac37-69de3c2acea5","Type":"ContainerDied","Data":"78b8b5a99f500c86bbe904d122650503a1f3d1a7bae26be3e31b5b4847f2153d"} Mar 16 16:02:05 crc kubenswrapper[4736]: I0316 16:02:05.395745 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:05 crc kubenswrapper[4736]: I0316 16:02:05.565646 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmv5b\" (UniqueName: \"kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b\") pod \"1faa3a39-edf5-45df-ac37-69de3c2acea5\" (UID: \"1faa3a39-edf5-45df-ac37-69de3c2acea5\") " Mar 16 16:02:05 crc kubenswrapper[4736]: I0316 16:02:05.580495 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b" (OuterVolumeSpecName: "kube-api-access-fmv5b") pod "1faa3a39-edf5-45df-ac37-69de3c2acea5" (UID: "1faa3a39-edf5-45df-ac37-69de3c2acea5"). InnerVolumeSpecName "kube-api-access-fmv5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:02:05 crc kubenswrapper[4736]: I0316 16:02:05.668832 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmv5b\" (UniqueName: \"kubernetes.io/projected/1faa3a39-edf5-45df-ac37-69de3c2acea5-kube-api-access-fmv5b\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.024605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" event={"ID":"1faa3a39-edf5-45df-ac37-69de3c2acea5","Type":"ContainerDied","Data":"23ec227ff696482d12bde6b5e24eeda4793288ed4efc6bd938fdb3874cbded3d"} Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.024655 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23ec227ff696482d12bde6b5e24eeda4793288ed4efc6bd938fdb3874cbded3d" Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.024713 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561282-kqh5k" Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.086972 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561276-n47zt"] Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.096046 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561276-n47zt"] Mar 16 16:02:06 crc kubenswrapper[4736]: I0316 16:02:06.991797 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80de45db-7b56-43ff-a1eb-e35f34e5de64" path="/var/lib/kubelet/pods/80de45db-7b56-43ff-a1eb-e35f34e5de64/volumes" Mar 16 16:02:08 crc kubenswrapper[4736]: I0316 16:02:08.507705 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:02:08 crc kubenswrapper[4736]: I0316 16:02:08.507771 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:02:30 crc kubenswrapper[4736]: I0316 16:02:30.054079 4736 generic.go:334] "Generic (PLEG): container finished" podID="bf51b7ea-25d5-4fa2-9abe-db781c31f96f" containerID="5e2d317568390de919777c35957f08ccb5a52264be995408f561145e7dd78912" exitCode=0 Mar 16 16:02:30 crc kubenswrapper[4736]: I0316 16:02:30.054162 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" event={"ID":"bf51b7ea-25d5-4fa2-9abe-db781c31f96f","Type":"ContainerDied","Data":"5e2d317568390de919777c35957f08ccb5a52264be995408f561145e7dd78912"} Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.520918 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.635799 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636309 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636384 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636454 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636498 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdfhz\" (UniqueName: \"kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636553 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.636592 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle\") pod \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\" (UID: \"bf51b7ea-25d5-4fa2-9abe-db781c31f96f\") " Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.641667 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz" (OuterVolumeSpecName: "kube-api-access-jdfhz") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "kube-api-access-jdfhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.642904 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.668579 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.678387 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.679022 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.680013 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory" (OuterVolumeSpecName: "inventory") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.680682 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bf51b7ea-25d5-4fa2-9abe-db781c31f96f" (UID: "bf51b7ea-25d5-4fa2-9abe-db781c31f96f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738522 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738558 4736 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-inventory\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738568 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738580 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdfhz\" (UniqueName: \"kubernetes.io/projected/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-kube-api-access-jdfhz\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738590 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738599 4736 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:31 crc kubenswrapper[4736]: I0316 16:02:31.738611 4736 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bf51b7ea-25d5-4fa2-9abe-db781c31f96f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 16 16:02:32 crc kubenswrapper[4736]: I0316 16:02:32.074704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" event={"ID":"bf51b7ea-25d5-4fa2-9abe-db781c31f96f","Type":"ContainerDied","Data":"8996351b88c6881c6dcac9a315a08344d8069b9eb6c8ef5ed7b4fc3faac76108"} Mar 16 16:02:32 crc kubenswrapper[4736]: I0316 16:02:32.074747 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8996351b88c6881c6dcac9a315a08344d8069b9eb6c8ef5ed7b4fc3faac76108" Mar 16 16:02:32 crc kubenswrapper[4736]: I0316 16:02:32.074786 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl" Mar 16 16:02:38 crc kubenswrapper[4736]: I0316 16:02:38.508391 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:02:38 crc kubenswrapper[4736]: I0316 16:02:38.509128 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:02:50 crc kubenswrapper[4736]: I0316 16:02:50.653868 4736 scope.go:117] "RemoveContainer" containerID="9cc0ffdd03fd069318084af0a58cd54e789774ae113bc2be1567ebbc91ce708d" Mar 16 16:03:08 crc kubenswrapper[4736]: I0316 16:03:08.507968 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:03:08 crc kubenswrapper[4736]: I0316 16:03:08.508643 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:03:08 crc kubenswrapper[4736]: I0316 16:03:08.508895 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:03:08 crc kubenswrapper[4736]: I0316 16:03:08.510286 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:03:08 crc kubenswrapper[4736]: I0316 16:03:08.510357 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" gracePeriod=600 Mar 16 16:03:08 crc kubenswrapper[4736]: E0316 16:03:08.630963 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:03:09 crc kubenswrapper[4736]: I0316 16:03:09.434367 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" exitCode=0 Mar 
16 16:03:09 crc kubenswrapper[4736]: I0316 16:03:09.434731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8"} Mar 16 16:03:09 crc kubenswrapper[4736]: I0316 16:03:09.434801 4736 scope.go:117] "RemoveContainer" containerID="b210602e9eee8800bde2e5397f030a8b6bfc604c9741a0e2ad86494bc7ece957" Mar 16 16:03:09 crc kubenswrapper[4736]: I0316 16:03:09.438389 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:03:09 crc kubenswrapper[4736]: E0316 16:03:09.438989 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.481345 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Mar 16 16:03:18 crc kubenswrapper[4736]: E0316 16:03:18.482468 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf51b7ea-25d5-4fa2-9abe-db781c31f96f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.482484 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf51b7ea-25d5-4fa2-9abe-db781c31f96f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 16 16:03:18 crc kubenswrapper[4736]: E0316 16:03:18.482514 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1faa3a39-edf5-45df-ac37-69de3c2acea5" containerName="oc" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.482520 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1faa3a39-edf5-45df-ac37-69de3c2acea5" containerName="oc" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.482684 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf51b7ea-25d5-4fa2-9abe-db781c31f96f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.482702 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1faa3a39-edf5-45df-ac37-69de3c2acea5" containerName="oc" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.483394 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.486965 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.488334 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.489265 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.492179 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.497291 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zn5xx" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647465 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647811 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647853 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647883 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647918 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647940 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647955 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwndq\" (UniqueName: \"kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.647988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.648004 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.750124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.750325 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.750463 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.750496 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.750569 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.751583 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.751681 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.751658 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.752037 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwndq\" (UniqueName: \"kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.752124 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.752143 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.752580 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.753149 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") device mount path \"/mnt/openstack/pv02\"" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.757593 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.760947 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.760994 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.770289 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.773998 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwndq\" (UniqueName: \"kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.782654 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:18 crc kubenswrapper[4736]: I0316 16:03:18.832724 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 16:03:19 crc kubenswrapper[4736]: I0316 16:03:19.376309 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Mar 16 16:03:19 crc kubenswrapper[4736]: I0316 16:03:19.536068 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31","Type":"ContainerStarted","Data":"307caa4d112429eb1d3f33368ee6e6756b3e73b61c54e41e28b54dd934a24f20"} Mar 16 16:03:24 crc kubenswrapper[4736]: I0316 16:03:24.979957 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:03:24 crc kubenswrapper[4736]: E0316 16:03:24.980799 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:03:39 crc kubenswrapper[4736]: I0316 16:03:39.978223 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:03:39 crc kubenswrapper[4736]: E0316 16:03:39.978996 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:03:51 crc kubenswrapper[4736]: I0316 16:03:51.978809 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:03:51 crc kubenswrapper[4736]: E0316 16:03:51.979705 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:03:54 crc kubenswrapper[4736]: E0316 16:03:54.363894 4736 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9" Mar 16 16:03:54 crc kubenswrapper[4736]: E0316 16:03:54.364245 4736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9" Mar 16 16:03:54 crc kubenswrapper[4736]: E0316 16:03:54.366769 4736 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwndq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 16 16:03:54 crc kubenswrapper[4736]: E0316 16:03:54.367994 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" Mar 16 16:03:54 crc kubenswrapper[4736]: E0316 16:03:54.951244 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.18:5001/podified-antelope-centos9/openstack-tempest-all:e43235cb19da04699a53f42b6a75afe9\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.140749 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561284-xc5nb"] Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.142520 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.144694 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.144885 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.144920 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.164868 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561284-xc5nb"] Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.337821 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vtld\" (UniqueName: \"kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld\") pod \"auto-csr-approver-29561284-xc5nb\" (UID: \"801867d0-26b4-4eed-a2d2-fdd653eea92f\") " pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.440694 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vtld\" (UniqueName: \"kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld\") pod \"auto-csr-approver-29561284-xc5nb\" (UID: \"801867d0-26b4-4eed-a2d2-fdd653eea92f\") " pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.460041 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vtld\" (UniqueName: \"kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld\") pod \"auto-csr-approver-29561284-xc5nb\" (UID: \"801867d0-26b4-4eed-a2d2-fdd653eea92f\") " pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.462100 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.922790 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561284-xc5nb"] Mar 16 16:04:00 crc kubenswrapper[4736]: I0316 16:04:00.998439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" event={"ID":"801867d0-26b4-4eed-a2d2-fdd653eea92f","Type":"ContainerStarted","Data":"13af8498b6cb4e6da5dcb991927bdfc4704f2d157f46ea63e4b6d8de1c1b8412"} Mar 16 16:04:03 crc kubenswrapper[4736]: I0316 16:04:03.028219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" event={"ID":"801867d0-26b4-4eed-a2d2-fdd653eea92f","Type":"ContainerStarted","Data":"e399496f77e933e8654777c11a34dad175c6bae4fff6c84b9cd2a94b3796dd44"} Mar 16 16:04:03 crc kubenswrapper[4736]: I0316 16:04:03.055699 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" podStartSLOduration=2.041313164 podStartE2EDuration="3.055681399s" podCreationTimestamp="2026-03-16 16:04:00 +0000 UTC" firstStartedPulling="2026-03-16 16:04:00.929917245 +0000 UTC m=+3042.657307532" lastFinishedPulling="2026-03-16 16:04:01.94428547 +0000 UTC m=+3043.671675767" observedRunningTime="2026-03-16 16:04:03.050082778 +0000 UTC m=+3044.777473075" watchObservedRunningTime="2026-03-16 16:04:03.055681399 +0000 UTC m=+3044.783071686" Mar 16 16:04:04 crc kubenswrapper[4736]: I0316 16:04:04.038200 4736 generic.go:334] "Generic (PLEG): container finished" podID="801867d0-26b4-4eed-a2d2-fdd653eea92f" containerID="e399496f77e933e8654777c11a34dad175c6bae4fff6c84b9cd2a94b3796dd44" exitCode=0 Mar 16 16:04:04 crc kubenswrapper[4736]: I0316 16:04:04.038247 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" event={"ID":"801867d0-26b4-4eed-a2d2-fdd653eea92f","Type":"ContainerDied","Data":"e399496f77e933e8654777c11a34dad175c6bae4fff6c84b9cd2a94b3796dd44"} Mar 16 16:04:04 crc kubenswrapper[4736]: I0316 16:04:04.978548 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:04:04 crc kubenswrapper[4736]: E0316 16:04:04.979231 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:04:05 crc kubenswrapper[4736]: I0316 16:04:05.546131 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:05 crc kubenswrapper[4736]: I0316 16:04:05.651839 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vtld\" (UniqueName: \"kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld\") pod \"801867d0-26b4-4eed-a2d2-fdd653eea92f\" (UID: \"801867d0-26b4-4eed-a2d2-fdd653eea92f\") " Mar 16 16:04:05 crc kubenswrapper[4736]: I0316 16:04:05.657510 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld" (OuterVolumeSpecName: "kube-api-access-9vtld") pod "801867d0-26b4-4eed-a2d2-fdd653eea92f" (UID: "801867d0-26b4-4eed-a2d2-fdd653eea92f"). InnerVolumeSpecName "kube-api-access-9vtld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:04:05 crc kubenswrapper[4736]: I0316 16:04:05.753702 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vtld\" (UniqueName: \"kubernetes.io/projected/801867d0-26b4-4eed-a2d2-fdd653eea92f-kube-api-access-9vtld\") on node \"crc\" DevicePath \"\"" Mar 16 16:04:06 crc kubenswrapper[4736]: I0316 16:04:06.065661 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" event={"ID":"801867d0-26b4-4eed-a2d2-fdd653eea92f","Type":"ContainerDied","Data":"13af8498b6cb4e6da5dcb991927bdfc4704f2d157f46ea63e4b6d8de1c1b8412"} Mar 16 16:04:06 crc kubenswrapper[4736]: I0316 16:04:06.065721 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13af8498b6cb4e6da5dcb991927bdfc4704f2d157f46ea63e4b6d8de1c1b8412" Mar 16 16:04:06 crc kubenswrapper[4736]: I0316 16:04:06.065798 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561284-xc5nb" Mar 16 16:04:06 crc kubenswrapper[4736]: I0316 16:04:06.153781 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561278-kk4fq"] Mar 16 16:04:06 crc kubenswrapper[4736]: I0316 16:04:06.165176 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561278-kk4fq"] Mar 16 16:04:07 crc kubenswrapper[4736]: I0316 16:04:07.003754 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f78a5b-48f7-4120-a8d4-5537e8378c0d" path="/var/lib/kubelet/pods/30f78a5b-48f7-4120-a8d4-5537e8378c0d/volumes" Mar 16 16:04:07 crc kubenswrapper[4736]: I0316 16:04:07.060606 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 16 16:04:09 crc kubenswrapper[4736]: I0316 16:04:09.095544 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31","Type":"ContainerStarted","Data":"98a899eb5d7c2fab7ff8f13bc38010546afd220ba2c4144b28b4a5b7d441beb2"} Mar 16 16:04:09 crc kubenswrapper[4736]: I0316 16:04:09.115343 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.4308773200000005 podStartE2EDuration="52.115326075s" podCreationTimestamp="2026-03-16 16:03:17 +0000 UTC" firstStartedPulling="2026-03-16 16:03:19.372942993 +0000 UTC m=+3001.100333280" lastFinishedPulling="2026-03-16 16:04:07.057391728 +0000 UTC m=+3048.784782035" observedRunningTime="2026-03-16 16:04:09.114178694 +0000 UTC m=+3050.841568991" watchObservedRunningTime="2026-03-16 16:04:09.115326075 +0000 UTC m=+3050.842716372" Mar 16 16:04:19 crc kubenswrapper[4736]: I0316 16:04:19.978624 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:04:19 crc kubenswrapper[4736]: E0316 16:04:19.979730 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:04:30 crc kubenswrapper[4736]: I0316 16:04:30.978407 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:04:30 crc kubenswrapper[4736]: E0316 16:04:30.979048 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:04:45 crc kubenswrapper[4736]: I0316 16:04:45.979017 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:04:45 crc kubenswrapper[4736]: E0316 16:04:45.979846 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:04:50 crc kubenswrapper[4736]: I0316 16:04:50.791651 4736 scope.go:117] "RemoveContainer" containerID="e526daeae278e39b64be1085317bf676310ee4121b8127c32c4ae808e5b93f2b" Mar 16 16:04:57 crc kubenswrapper[4736]: I0316 16:04:57.978726 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:04:57 crc kubenswrapper[4736]: E0316 16:04:57.979602 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:05:09 crc kubenswrapper[4736]: I0316 16:05:09.978698 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:05:09 crc kubenswrapper[4736]: E0316 16:05:09.979574 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:05:23 crc kubenswrapper[4736]: I0316 16:05:23.978870 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:05:23 crc kubenswrapper[4736]: E0316 16:05:23.979959 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.463440 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:05:28 crc kubenswrapper[4736]: E0316 16:05:28.464846 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801867d0-26b4-4eed-a2d2-fdd653eea92f" containerName="oc" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.464870 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="801867d0-26b4-4eed-a2d2-fdd653eea92f" containerName="oc" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.465264 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="801867d0-26b4-4eed-a2d2-fdd653eea92f" containerName="oc" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.467952 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.536201 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.664721 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.664938 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.664992 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5dx4\" (UniqueName: \"kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.767388 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.767456 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5dx4\" (UniqueName: \"kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.767592 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.767959 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.768059 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.791002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f5dx4\" (UniqueName: \"kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4\") pod \"redhat-operators-m9skw\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:28 crc kubenswrapper[4736]: I0316 16:05:28.795486 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:29 crc kubenswrapper[4736]: I0316 16:05:29.469529 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:05:29 crc kubenswrapper[4736]: I0316 16:05:29.845000 4736 generic.go:334] "Generic (PLEG): container finished" podID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerID="b66c33941001c6fd0b992e49d3fe81f4fc1a0f26d594c9beed01e692f4197cd0" exitCode=0 Mar 16 16:05:29 crc kubenswrapper[4736]: I0316 16:05:29.845267 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerDied","Data":"b66c33941001c6fd0b992e49d3fe81f4fc1a0f26d594c9beed01e692f4197cd0"} Mar 16 16:05:29 crc kubenswrapper[4736]: I0316 16:05:29.845302 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerStarted","Data":"af13687901fa5721d113f40e8fa5a73662e3f5678907dbede8d3d7bc8544793c"} Mar 16 16:05:30 crc kubenswrapper[4736]: I0316 16:05:30.859066 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerStarted","Data":"80d62407c507f6ae7df6ff2b2013b2d36471e97ea14f4e1330cd26000e4ea98f"} Mar 16 16:05:36 crc kubenswrapper[4736]: I0316 16:05:36.914535 4736 generic.go:334] "Generic (PLEG): container finished" podID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerID="80d62407c507f6ae7df6ff2b2013b2d36471e97ea14f4e1330cd26000e4ea98f" exitCode=0 Mar 16 16:05:36 crc kubenswrapper[4736]: I0316 16:05:36.914700 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerDied","Data":"80d62407c507f6ae7df6ff2b2013b2d36471e97ea14f4e1330cd26000e4ea98f"} Mar 16 16:05:36 crc kubenswrapper[4736]: I0316 16:05:36.978700 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:05:36 crc kubenswrapper[4736]: E0316 16:05:36.979253 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:05:37 crc kubenswrapper[4736]: I0316 16:05:37.926084 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerStarted","Data":"2acf0155f7ed872d3879eda91d252b136d417bae9121cf038aeaaf050bd98cb1"} Mar 16 16:05:37 crc kubenswrapper[4736]: I0316 16:05:37.955097 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-m9skw" podStartSLOduration=2.476134189 podStartE2EDuration="9.955076229s" podCreationTimestamp="2026-03-16 16:05:28 +0000 UTC" firstStartedPulling="2026-03-16 16:05:29.846877708 +0000 UTC m=+3131.574267995" lastFinishedPulling="2026-03-16 16:05:37.325819748 +0000 UTC m=+3139.053210035" observedRunningTime="2026-03-16 16:05:37.947999678 +0000 UTC m=+3139.675389975" watchObservedRunningTime="2026-03-16 16:05:37.955076229 +0000 UTC m=+3139.682466536" Mar 16 16:05:38 crc kubenswrapper[4736]: I0316 16:05:38.796423 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:38 crc kubenswrapper[4736]: I0316 16:05:38.796702 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:05:39 crc kubenswrapper[4736]: I0316 16:05:39.849134 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" probeResult="failure" output=< Mar 16 16:05:39 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:05:39 crc kubenswrapper[4736]: > Mar 16 16:05:49 crc kubenswrapper[4736]: I0316 16:05:49.869025 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" probeResult="failure" output=< Mar 16 16:05:49 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:05:49 crc kubenswrapper[4736]: > Mar 16 16:05:51 crc kubenswrapper[4736]: I0316 16:05:51.978481 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:05:51 crc kubenswrapper[4736]: E0316 16:05:51.979147 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:05:58 crc kubenswrapper[4736]: E0316 16:05:58.385012 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:52098->38.102.83.30:38289: write tcp 38.102.83.30:52098->38.102.83.30:38289: write: connection reset by peer Mar 16 16:05:59 crc kubenswrapper[4736]: I0316 16:05:59.861998 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" probeResult="failure" output=< Mar 16 16:05:59 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:05:59 crc kubenswrapper[4736]: > Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.295754 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561286-88p5t"] Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.296878 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.304348 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.304367 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.304592 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.399556 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561286-88p5t"] Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.424185 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntjg\" (UniqueName: \"kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg\") pod \"auto-csr-approver-29561286-88p5t\" (UID: \"f08d77cb-4954-4f67-95e3-5f214c7b3ddd\") " pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.526189 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntjg\" (UniqueName: \"kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg\") pod \"auto-csr-approver-29561286-88p5t\" (UID: \"f08d77cb-4954-4f67-95e3-5f214c7b3ddd\") " pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.576924 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntjg\" (UniqueName: \"kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg\") pod \"auto-csr-approver-29561286-88p5t\" (UID: \"f08d77cb-4954-4f67-95e3-5f214c7b3ddd\") " pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:00 crc kubenswrapper[4736]: I0316 16:06:00.627565 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:01 crc kubenswrapper[4736]: I0316 16:06:01.832353 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561286-88p5t"] Mar 16 16:06:01 crc kubenswrapper[4736]: I0316 16:06:01.850998 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:06:02 crc kubenswrapper[4736]: I0316 16:06:02.141505 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561286-88p5t" event={"ID":"f08d77cb-4954-4f67-95e3-5f214c7b3ddd","Type":"ContainerStarted","Data":"8b66216262004008441ba11e7a363f2fd364178b4d095fb8a1436329f33e9b4d"} Mar 16 16:06:03 crc kubenswrapper[4736]: I0316 16:06:03.978237 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:06:03 crc kubenswrapper[4736]: E0316 16:06:03.978931 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:06:04 crc kubenswrapper[4736]: I0316 16:06:04.158869 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561286-88p5t" event={"ID":"f08d77cb-4954-4f67-95e3-5f214c7b3ddd","Type":"ContainerStarted","Data":"8531b8f6bf1e8be892b00a20fbb1549047388c151d723d8e862724df8825753d"} Mar 16 16:06:04 crc kubenswrapper[4736]: I0316 16:06:04.181180 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561286-88p5t" podStartSLOduration=2.851931655 podStartE2EDuration="4.181162192s" podCreationTimestamp="2026-03-16 16:06:00 +0000 UTC" firstStartedPulling="2026-03-16 16:06:01.848001622 +0000 UTC m=+3163.575391909" lastFinishedPulling="2026-03-16 16:06:03.177232159 +0000 UTC m=+3164.904622446" observedRunningTime="2026-03-16 16:06:04.175297564 +0000 UTC m=+3165.902687851" watchObservedRunningTime="2026-03-16 16:06:04.181162192 +0000 UTC m=+3165.908552479" Mar 16 16:06:06 crc kubenswrapper[4736]: I0316 16:06:06.179035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561286-88p5t" event={"ID":"f08d77cb-4954-4f67-95e3-5f214c7b3ddd","Type":"ContainerDied","Data":"8531b8f6bf1e8be892b00a20fbb1549047388c151d723d8e862724df8825753d"} Mar 16 16:06:06 crc kubenswrapper[4736]: I0316 16:06:06.179394 4736 generic.go:334] "Generic (PLEG): container finished" podID="f08d77cb-4954-4f67-95e3-5f214c7b3ddd" containerID="8531b8f6bf1e8be892b00a20fbb1549047388c151d723d8e862724df8825753d" exitCode=0 Mar 16 16:06:07 crc kubenswrapper[4736]: I0316 16:06:07.748547 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:07 crc kubenswrapper[4736]: I0316 16:06:07.955761 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntjg\" (UniqueName: \"kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg\") pod \"f08d77cb-4954-4f67-95e3-5f214c7b3ddd\" (UID: \"f08d77cb-4954-4f67-95e3-5f214c7b3ddd\") " Mar 16 16:06:07 crc kubenswrapper[4736]: I0316 16:06:07.971203 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg" (OuterVolumeSpecName: "kube-api-access-sntjg") pod "f08d77cb-4954-4f67-95e3-5f214c7b3ddd" (UID: "f08d77cb-4954-4f67-95e3-5f214c7b3ddd"). InnerVolumeSpecName "kube-api-access-sntjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.059044 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntjg\" (UniqueName: \"kubernetes.io/projected/f08d77cb-4954-4f67-95e3-5f214c7b3ddd-kube-api-access-sntjg\") on node \"crc\" DevicePath \"\"" Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.195134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561286-88p5t" event={"ID":"f08d77cb-4954-4f67-95e3-5f214c7b3ddd","Type":"ContainerDied","Data":"8b66216262004008441ba11e7a363f2fd364178b4d095fb8a1436329f33e9b4d"} Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.195258 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561286-88p5t" Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.196086 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b66216262004008441ba11e7a363f2fd364178b4d095fb8a1436329f33e9b4d" Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.278203 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561280-j4bz6"] Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.285571 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561280-j4bz6"] Mar 16 16:06:08 crc kubenswrapper[4736]: I0316 16:06:08.991353 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41929956-3d26-4a51-9444-5011e999a62d" path="/var/lib/kubelet/pods/41929956-3d26-4a51-9444-5011e999a62d/volumes" Mar 16 16:06:09 crc kubenswrapper[4736]: I0316 16:06:09.866493 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" probeResult="failure" output=< Mar 16 16:06:09 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:06:09 crc kubenswrapper[4736]: > Mar 16 16:06:15 crc kubenswrapper[4736]: I0316 16:06:15.978154 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:06:15 crc kubenswrapper[4736]: E0316 16:06:15.978857 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:06:19 crc kubenswrapper[4736]: I0316 16:06:19.851668 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" probeResult="failure" output=< Mar 16 16:06:19 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:06:19 crc kubenswrapper[4736]: > Mar 16 16:06:27 crc kubenswrapper[4736]: I0316 16:06:27.978259 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:06:27 crc kubenswrapper[4736]: E0316 16:06:27.979167 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:06:28 crc kubenswrapper[4736]: I0316 16:06:28.869722 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:06:28 crc kubenswrapper[4736]: I0316 16:06:28.932081 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:06:29 crc kubenswrapper[4736]: I0316 16:06:29.131946 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:06:30 crc kubenswrapper[4736]: I0316 16:06:30.403585 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m9skw" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" containerID="cri-o://2acf0155f7ed872d3879eda91d252b136d417bae9121cf038aeaaf050bd98cb1" gracePeriod=2 Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.414330 4736 generic.go:334] "Generic (PLEG): container finished" podID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerID="2acf0155f7ed872d3879eda91d252b136d417bae9121cf038aeaaf050bd98cb1" exitCode=0 Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.414412 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerDied","Data":"2acf0155f7ed872d3879eda91d252b136d417bae9121cf038aeaaf050bd98cb1"} Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.414790 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m9skw" event={"ID":"c04a340b-25b7-40e5-9de9-999a6959efcf","Type":"ContainerDied","Data":"af13687901fa5721d113f40e8fa5a73662e3f5678907dbede8d3d7bc8544793c"} Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.414802 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af13687901fa5721d113f40e8fa5a73662e3f5678907dbede8d3d7bc8544793c" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.490929 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.554475 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5dx4\" (UniqueName: \"kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4\") pod \"c04a340b-25b7-40e5-9de9-999a6959efcf\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.554623 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities\") pod \"c04a340b-25b7-40e5-9de9-999a6959efcf\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.554682 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content\") pod \"c04a340b-25b7-40e5-9de9-999a6959efcf\" (UID: \"c04a340b-25b7-40e5-9de9-999a6959efcf\") " Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.562052 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities" (OuterVolumeSpecName: "utilities") pod "c04a340b-25b7-40e5-9de9-999a6959efcf" (UID: "c04a340b-25b7-40e5-9de9-999a6959efcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.590732 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4" (OuterVolumeSpecName: "kube-api-access-f5dx4") pod "c04a340b-25b7-40e5-9de9-999a6959efcf" (UID: "c04a340b-25b7-40e5-9de9-999a6959efcf"). InnerVolumeSpecName "kube-api-access-f5dx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.659953 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5dx4\" (UniqueName: \"kubernetes.io/projected/c04a340b-25b7-40e5-9de9-999a6959efcf-kube-api-access-f5dx4\") on node \"crc\" DevicePath \"\"" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.660210 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.825295 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c04a340b-25b7-40e5-9de9-999a6959efcf" (UID: "c04a340b-25b7-40e5-9de9-999a6959efcf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:06:31 crc kubenswrapper[4736]: I0316 16:06:31.864317 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c04a340b-25b7-40e5-9de9-999a6959efcf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:06:32 crc kubenswrapper[4736]: I0316 16:06:32.422279 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m9skw" Mar 16 16:06:32 crc kubenswrapper[4736]: I0316 16:06:32.467052 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:06:32 crc kubenswrapper[4736]: I0316 16:06:32.479148 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m9skw"] Mar 16 16:06:32 crc kubenswrapper[4736]: I0316 16:06:32.991048 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" path="/var/lib/kubelet/pods/c04a340b-25b7-40e5-9de9-999a6959efcf/volumes" Mar 16 16:06:42 crc kubenswrapper[4736]: I0316 16:06:42.978204 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:06:42 crc kubenswrapper[4736]: E0316 16:06:42.979078 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:06:51 crc kubenswrapper[4736]: I0316 16:06:51.047047 4736 scope.go:117] "RemoveContainer" containerID="1a60fd41b672fdf85125d6607ce0feae8fdb9ac4f2488a72e65fc4a69fb5f52b" Mar 16 16:06:54 crc kubenswrapper[4736]: I0316 16:06:54.978038 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:06:54 crc kubenswrapper[4736]: E0316 16:06:54.978873 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:07:05 crc kubenswrapper[4736]: I0316 16:07:05.979759 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:07:05 crc kubenswrapper[4736]: E0316 16:07:05.980512 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:07:17 crc kubenswrapper[4736]: I0316 16:07:17.978288 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:07:17 crc kubenswrapper[4736]: E0316 16:07:17.979146 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:07:30 crc 
kubenswrapper[4736]: I0316 16:07:30.980918 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:07:30 crc kubenswrapper[4736]: E0316 16:07:30.981608 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:07:43 crc kubenswrapper[4736]: I0316 16:07:43.978726 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:07:43 crc kubenswrapper[4736]: E0316 16:07:43.982022 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:07:54 crc kubenswrapper[4736]: I0316 16:07:54.978176 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:07:54 crc kubenswrapper[4736]: E0316 16:07:54.978940 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.410344 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561288-9hxnz"] Mar 16 16:08:00 crc kubenswrapper[4736]: E0316 16:08:00.422027 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.422074 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" Mar 16 16:08:00 crc kubenswrapper[4736]: E0316 16:08:00.422459 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="extract-utilities" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.422474 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="extract-utilities" Mar 16 16:08:00 crc kubenswrapper[4736]: E0316 16:08:00.422515 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08d77cb-4954-4f67-95e3-5f214c7b3ddd" containerName="oc" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.422522 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08d77cb-4954-4f67-95e3-5f214c7b3ddd" containerName="oc" Mar 16 16:08:00 crc kubenswrapper[4736]: E0316 16:08:00.422541 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="extract-content" Mar 16 
16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.422547 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="extract-content" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.424134 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08d77cb-4954-4f67-95e3-5f214c7b3ddd" containerName="oc" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.424155 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04a340b-25b7-40e5-9de9-999a6959efcf" containerName="registry-server" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.431182 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.450279 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561288-9hxnz"] Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.449590 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.449598 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.449619 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.457401 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvkjj\" (UniqueName: \"kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj\") pod \"auto-csr-approver-29561288-9hxnz\" (UID: \"a4e12120-062f-47ca-a22b-bb863e9adab8\") " pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.559286 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvkjj\" (UniqueName: \"kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj\") pod \"auto-csr-approver-29561288-9hxnz\" (UID: \"a4e12120-062f-47ca-a22b-bb863e9adab8\") " pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.593327 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvkjj\" (UniqueName: \"kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj\") pod \"auto-csr-approver-29561288-9hxnz\" (UID: \"a4e12120-062f-47ca-a22b-bb863e9adab8\") " pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:00 crc kubenswrapper[4736]: I0316 16:08:00.761217 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:02 crc kubenswrapper[4736]: I0316 16:08:02.133800 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561288-9hxnz"] Mar 16 16:08:02 crc kubenswrapper[4736]: W0316 16:08:02.184987 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e12120_062f_47ca_a22b_bb863e9adab8.slice/crio-35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131 WatchSource:0}: Error finding container 35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131: Status 404 returned error can't find the container with id 35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131 Mar 16 16:08:02 crc kubenswrapper[4736]: I0316 16:08:02.312906 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" event={"ID":"a4e12120-062f-47ca-a22b-bb863e9adab8","Type":"ContainerStarted","Data":"35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131"} Mar 16 16:08:04 crc kubenswrapper[4736]: I0316 16:08:04.337226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" event={"ID":"a4e12120-062f-47ca-a22b-bb863e9adab8","Type":"ContainerStarted","Data":"dd6ba9fd1cef5b1e4b86a56ec8407e653761377dfa1ff56d2c742e890d479eab"} Mar 16 16:08:04 crc kubenswrapper[4736]: I0316 16:08:04.362908 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" podStartSLOduration=3.31186539 podStartE2EDuration="4.358484425s" podCreationTimestamp="2026-03-16 16:08:00 +0000 UTC" firstStartedPulling="2026-03-16 16:08:02.200465171 +0000 UTC m=+3283.927855458" lastFinishedPulling="2026-03-16 16:08:03.247084196 +0000 UTC m=+3284.974474493" observedRunningTime="2026-03-16 16:08:04.357594661 +0000 UTC m=+3286.084984968" watchObservedRunningTime="2026-03-16 16:08:04.358484425 +0000 UTC m=+3286.085874712" Mar 16 16:08:05 crc kubenswrapper[4736]: I0316 16:08:05.355729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" event={"ID":"a4e12120-062f-47ca-a22b-bb863e9adab8","Type":"ContainerDied","Data":"dd6ba9fd1cef5b1e4b86a56ec8407e653761377dfa1ff56d2c742e890d479eab"} Mar 16 16:08:05 crc kubenswrapper[4736]: I0316 16:08:05.356292 4736 generic.go:334] "Generic (PLEG): container finished" podID="a4e12120-062f-47ca-a22b-bb863e9adab8" containerID="dd6ba9fd1cef5b1e4b86a56ec8407e653761377dfa1ff56d2c742e890d479eab" exitCode=0 Mar 16 16:08:06 crc kubenswrapper[4736]: I0316 16:08:06.847907 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:06 crc kubenswrapper[4736]: I0316 16:08:06.979711 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:08:06 crc kubenswrapper[4736]: E0316 16:08:06.979928 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:08:06 crc kubenswrapper[4736]: I0316 16:08:06.981913 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvkjj\" (UniqueName: \"kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj\") pod \"a4e12120-062f-47ca-a22b-bb863e9adab8\" (UID: \"a4e12120-062f-47ca-a22b-bb863e9adab8\") " Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.023368 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj" (OuterVolumeSpecName: "kube-api-access-fvkjj") pod "a4e12120-062f-47ca-a22b-bb863e9adab8" (UID: "a4e12120-062f-47ca-a22b-bb863e9adab8"). InnerVolumeSpecName "kube-api-access-fvkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.090070 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvkjj\" (UniqueName: \"kubernetes.io/projected/a4e12120-062f-47ca-a22b-bb863e9adab8-kube-api-access-fvkjj\") on node \"crc\" DevicePath \"\"" Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.377914 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" event={"ID":"a4e12120-062f-47ca-a22b-bb863e9adab8","Type":"ContainerDied","Data":"35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131"} Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.378507 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561288-9hxnz" Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.379378 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b9cfac8ba281293ce57892fcdc564614578d095aa20df301b7185d192e3131" Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.475782 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561282-kqh5k"] Mar 16 16:08:07 crc kubenswrapper[4736]: I0316 16:08:07.503440 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561282-kqh5k"] Mar 16 16:08:08 crc kubenswrapper[4736]: I0316 16:08:08.992554 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1faa3a39-edf5-45df-ac37-69de3c2acea5" path="/var/lib/kubelet/pods/1faa3a39-edf5-45df-ac37-69de3c2acea5/volumes" Mar 16 16:08:18 crc kubenswrapper[4736]: I0316 16:08:18.985664 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:08:19 crc kubenswrapper[4736]: I0316 16:08:19.671972 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db"} Mar 16 16:08:51 crc kubenswrapper[4736]: I0316 16:08:51.310592 4736 scope.go:117] "RemoveContainer" containerID="78b8b5a99f500c86bbe904d122650503a1f3d1a7bae26be3e31b5b4847f2153d" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.455319 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:09:19 crc kubenswrapper[4736]: E0316 16:09:19.463646 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e12120-062f-47ca-a22b-bb863e9adab8" containerName="oc" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.463692 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e12120-062f-47ca-a22b-bb863e9adab8" containerName="oc" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.465227 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e12120-062f-47ca-a22b-bb863e9adab8" containerName="oc" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.472932 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.573096 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.655745 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.655838 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.655866 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jszh\" (UniqueName: \"kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.758248 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.758293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jszh\" (UniqueName: \"kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.758540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.769626 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.769626 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.802129 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jszh\" (UniqueName: \"kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh\") pod \"certified-operators-7qw5p\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:19 crc kubenswrapper[4736]: I0316 16:09:19.841147 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:22 crc kubenswrapper[4736]: I0316 16:09:22.072232 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:09:22 crc kubenswrapper[4736]: I0316 16:09:22.288614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerStarted","Data":"e12c4d7c423c738c1d519cbdead267b9248253182825fddd1e6bae75b9d880e6"} Mar 16 16:09:23 crc kubenswrapper[4736]: I0316 16:09:23.332413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerDied","Data":"66bc3ca9977a6552327d5d772a51117009769a4243aac221eff1851050c686df"} Mar 16 16:09:23 crc kubenswrapper[4736]: I0316 16:09:23.333184 4736 generic.go:334] "Generic (PLEG): container finished" podID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerID="66bc3ca9977a6552327d5d772a51117009769a4243aac221eff1851050c686df" exitCode=0 Mar 16 16:09:24 crc kubenswrapper[4736]: E0316 16:09:24.013382 4736 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.30:54834->38.102.83.30:38289: read tcp 38.102.83.30:54834->38.102.83.30:38289: read: connection reset by peer Mar 16 16:09:24 crc kubenswrapper[4736]: I0316 16:09:24.346064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerStarted","Data":"31f42a54f573e9ddc50654922c9e109eaf871ae2a7b99cb57d2fe6ebfc6b358f"} Mar 16 16:09:27 crc kubenswrapper[4736]: I0316 16:09:27.371700 4736 generic.go:334] "Generic (PLEG): container finished" podID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerID="31f42a54f573e9ddc50654922c9e109eaf871ae2a7b99cb57d2fe6ebfc6b358f" exitCode=0 Mar 16 16:09:27 crc kubenswrapper[4736]: I0316 16:09:27.371763 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerDied","Data":"31f42a54f573e9ddc50654922c9e109eaf871ae2a7b99cb57d2fe6ebfc6b358f"} Mar 16 16:09:28 crc kubenswrapper[4736]: I0316 16:09:28.411654 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerStarted","Data":"49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b"} Mar 16 16:09:28 crc kubenswrapper[4736]: I0316 16:09:28.444218 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qw5p" podStartSLOduration=4.912106484 podStartE2EDuration="9.438478428s" podCreationTimestamp="2026-03-16 16:09:19 +0000 UTC" firstStartedPulling="2026-03-16 16:09:23.334945705 +0000 UTC m=+3365.062335982" lastFinishedPulling="2026-03-16 16:09:27.861317639 +0000 UTC m=+3369.588707926" 
observedRunningTime="2026-03-16 16:09:28.430288878 +0000 UTC m=+3370.157679165" watchObservedRunningTime="2026-03-16 16:09:28.438478428 +0000 UTC m=+3370.165868705" Mar 16 16:09:29 crc kubenswrapper[4736]: I0316 16:09:29.846190 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:29 crc kubenswrapper[4736]: I0316 16:09:29.847753 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:09:31 crc kubenswrapper[4736]: I0316 16:09:31.011017 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7qw5p" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" probeResult="failure" output=< Mar 16 16:09:31 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:09:31 crc kubenswrapper[4736]: > Mar 16 16:09:41 crc kubenswrapper[4736]: I0316 16:09:41.026878 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7qw5p" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" probeResult="failure" output=< Mar 16 16:09:41 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:09:41 crc kubenswrapper[4736]: > Mar 16 16:09:51 crc kubenswrapper[4736]: I0316 16:09:51.062161 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7qw5p" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" probeResult="failure" output=< Mar 16 16:09:51 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:09:51 crc kubenswrapper[4736]: > Mar 16 16:10:00 crc kubenswrapper[4736]: I0316 16:10:00.006789 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:10:00 crc kubenswrapper[4736]: I0316 16:10:00.065505 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.344373 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.411307 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561290-6q6qm"] Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.437199 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.472628 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.472631 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.481867 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.495915 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llt6m\" (UniqueName: \"kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m\") pod \"auto-csr-approver-29561290-6q6qm\" (UID: \"ff8c1171-e56d-44e6-bfdf-180d12747766\") " pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.582766 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561290-6q6qm"] Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.598243 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llt6m\" (UniqueName: \"kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m\") pod \"auto-csr-approver-29561290-6q6qm\" (UID: \"ff8c1171-e56d-44e6-bfdf-180d12747766\") " pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.656008 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llt6m\" (UniqueName: \"kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m\") pod \"auto-csr-approver-29561290-6q6qm\" (UID: \"ff8c1171-e56d-44e6-bfdf-180d12747766\") " pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.801756 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qw5p" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" containerID="cri-o://49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b" gracePeriod=2 Mar 16 16:10:01 crc kubenswrapper[4736]: I0316 16:10:01.837418 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:02 crc kubenswrapper[4736]: E0316 16:10:02.068203 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a674f49_8dce_4b04_a4d6_c5057dac49f4.slice/crio-conmon-49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:10:02 crc kubenswrapper[4736]: I0316 16:10:02.812948 4736 generic.go:334] "Generic (PLEG): container finished" podID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerID="49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b" exitCode=0 Mar 16 16:10:02 crc kubenswrapper[4736]: I0316 16:10:02.813040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerDied","Data":"49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b"} Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.647599 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.719124 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561290-6q6qm"] Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.751733 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities\") pod \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.751940 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jszh\" (UniqueName: \"kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh\") pod \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.752132 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content\") pod \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\" (UID: \"5a674f49-8dce-4b04-a4d6-c5057dac49f4\") " Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.757518 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities" (OuterVolumeSpecName: "utilities") pod "5a674f49-8dce-4b04-a4d6-c5057dac49f4" (UID: "5a674f49-8dce-4b04-a4d6-c5057dac49f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.778927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh" (OuterVolumeSpecName: "kube-api-access-5jszh") pod "5a674f49-8dce-4b04-a4d6-c5057dac49f4" (UID: "5a674f49-8dce-4b04-a4d6-c5057dac49f4"). InnerVolumeSpecName "kube-api-access-5jszh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.829941 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" event={"ID":"ff8c1171-e56d-44e6-bfdf-180d12747766","Type":"ContainerStarted","Data":"233589a6edbee5f73194137c72634606d11d011356e6f49aa1a78d1ccd1b3882"} Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.836407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qw5p" event={"ID":"5a674f49-8dce-4b04-a4d6-c5057dac49f4","Type":"ContainerDied","Data":"e12c4d7c423c738c1d519cbdead267b9248253182825fddd1e6bae75b9d880e6"} Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.836926 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qw5p" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.838567 4736 scope.go:117] "RemoveContainer" containerID="49837f907903b543d953d0267f2dab516578c25c90310ad4e357fd479ca0f58b" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.856158 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jszh\" (UniqueName: \"kubernetes.io/projected/5a674f49-8dce-4b04-a4d6-c5057dac49f4-kube-api-access-5jszh\") on node \"crc\" DevicePath \"\"" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.856202 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.898089 4736 scope.go:117] "RemoveContainer" containerID="31f42a54f573e9ddc50654922c9e109eaf871ae2a7b99cb57d2fe6ebfc6b358f" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.914016 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a674f49-8dce-4b04-a4d6-c5057dac49f4" (UID: "5a674f49-8dce-4b04-a4d6-c5057dac49f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.948422 4736 scope.go:117] "RemoveContainer" containerID="66bc3ca9977a6552327d5d772a51117009769a4243aac221eff1851050c686df" Mar 16 16:10:03 crc kubenswrapper[4736]: I0316 16:10:03.959511 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a674f49-8dce-4b04-a4d6-c5057dac49f4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:10:04 crc kubenswrapper[4736]: I0316 16:10:04.192193 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:10:04 crc kubenswrapper[4736]: I0316 16:10:04.205915 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qw5p"] Mar 16 16:10:05 crc kubenswrapper[4736]: I0316 16:10:05.022776 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" path="/var/lib/kubelet/pods/5a674f49-8dce-4b04-a4d6-c5057dac49f4/volumes" Mar 16 16:10:06 crc kubenswrapper[4736]: I0316 16:10:06.871118 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" event={"ID":"ff8c1171-e56d-44e6-bfdf-180d12747766","Type":"ContainerStarted","Data":"6b7559ae65116fdafb6a761bb20fdc1579c764e6b8ce41b99bf7065a28019a44"} Mar 16 16:10:06 crc kubenswrapper[4736]: I0316 16:10:06.936537 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" podStartSLOduration=5.820569441 podStartE2EDuration="6.933477404s" podCreationTimestamp="2026-03-16 16:10:00 +0000 UTC" firstStartedPulling="2026-03-16 16:10:03.798281309 +0000 UTC m=+3405.525671606" lastFinishedPulling="2026-03-16 16:10:04.911189282 +0000 UTC m=+3406.638579569" observedRunningTime="2026-03-16 16:10:06.929883777 +0000 UTC m=+3408.657274054" watchObservedRunningTime="2026-03-16 16:10:06.933477404 +0000 UTC m=+3408.660867691" Mar 16 16:10:08 crc kubenswrapper[4736]: I0316 16:10:08.887835 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" event={"ID":"ff8c1171-e56d-44e6-bfdf-180d12747766","Type":"ContainerDied","Data":"6b7559ae65116fdafb6a761bb20fdc1579c764e6b8ce41b99bf7065a28019a44"} Mar 16 16:10:08 crc kubenswrapper[4736]: I0316 16:10:08.890757 4736 generic.go:334] "Generic (PLEG): container finished" podID="ff8c1171-e56d-44e6-bfdf-180d12747766" containerID="6b7559ae65116fdafb6a761bb20fdc1579c764e6b8ce41b99bf7065a28019a44" exitCode=0 Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.428306 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.537774 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llt6m\" (UniqueName: \"kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m\") pod \"ff8c1171-e56d-44e6-bfdf-180d12747766\" (UID: \"ff8c1171-e56d-44e6-bfdf-180d12747766\") " Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.608251 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m" (OuterVolumeSpecName: "kube-api-access-llt6m") pod "ff8c1171-e56d-44e6-bfdf-180d12747766" (UID: "ff8c1171-e56d-44e6-bfdf-180d12747766"). InnerVolumeSpecName "kube-api-access-llt6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.642088 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llt6m\" (UniqueName: \"kubernetes.io/projected/ff8c1171-e56d-44e6-bfdf-180d12747766-kube-api-access-llt6m\") on node \"crc\" DevicePath \"\"" Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.918393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" event={"ID":"ff8c1171-e56d-44e6-bfdf-180d12747766","Type":"ContainerDied","Data":"233589a6edbee5f73194137c72634606d11d011356e6f49aa1a78d1ccd1b3882"} Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.918887 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561290-6q6qm" Mar 16 16:10:11 crc kubenswrapper[4736]: I0316 16:10:11.919302 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="233589a6edbee5f73194137c72634606d11d011356e6f49aa1a78d1ccd1b3882" Mar 16 16:10:12 crc kubenswrapper[4736]: I0316 16:10:12.724832 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561284-xc5nb"] Mar 16 16:10:12 crc kubenswrapper[4736]: I0316 16:10:12.735630 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561284-xc5nb"] Mar 16 16:10:13 crc kubenswrapper[4736]: I0316 16:10:13.006426 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801867d0-26b4-4eed-a2d2-fdd653eea92f" path="/var/lib/kubelet/pods/801867d0-26b4-4eed-a2d2-fdd653eea92f/volumes" Mar 16 16:10:38 crc kubenswrapper[4736]: I0316 16:10:38.511703 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:10:38 crc kubenswrapper[4736]: I0316 16:10:38.518800 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:10:51 crc kubenswrapper[4736]: I0316 16:10:51.689635 4736 scope.go:117] "RemoveContainer" containerID="e399496f77e933e8654777c11a34dad175c6bae4fff6c84b9cd2a94b3796dd44" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.358836 4736 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:02 crc kubenswrapper[4736]: E0316 16:11:02.364423 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff8c1171-e56d-44e6-bfdf-180d12747766" containerName="oc" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.364458 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff8c1171-e56d-44e6-bfdf-180d12747766" containerName="oc" Mar 16 16:11:02 crc kubenswrapper[4736]: E0316 16:11:02.364495 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="extract-utilities" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.364505 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="extract-utilities" Mar 16 16:11:02 crc kubenswrapper[4736]: E0316 16:11:02.364524 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.364532 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" Mar 16 16:11:02 crc kubenswrapper[4736]: E0316 16:11:02.364551 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="extract-content" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.364559 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="extract-content" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.365729 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a674f49-8dce-4b04-a4d6-c5057dac49f4" containerName="registry-server" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.365754 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff8c1171-e56d-44e6-bfdf-180d12747766" containerName="oc" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.375625 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.514710 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trths\" (UniqueName: \"kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.514895 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.514937 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.582023 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.620356 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trths\" (UniqueName: \"kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.620520 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.620568 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.628358 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.630131 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.662059 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-trths\" (UniqueName: \"kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths\") pod \"redhat-marketplace-zp658\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:02 crc kubenswrapper[4736]: I0316 16:11:02.755900 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.078872 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.441127 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerDied","Data":"5d00be33fa1987406ad0aa75fa190dfcc839ffa42b47205bfda3b662cce4143b"} Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.442357 4736 generic.go:334] "Generic (PLEG): container finished" podID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerID="5d00be33fa1987406ad0aa75fa190dfcc839ffa42b47205bfda3b662cce4143b" exitCode=0 Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.442409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerStarted","Data":"5b203d0bfa7befd1b00bd084e8c853c46af7c06204fb9134b2c0925dc4d445c3"} Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.449441 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.554309 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.556438 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.566441 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.656432 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xggjn\" (UniqueName: \"kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.656551 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.656663 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.758259 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.758372 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xggjn\" (UniqueName: \"kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.758452 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.760035 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.762600 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.799996 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xggjn\" (UniqueName: \"kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn\") pod \"community-operators-r985x\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:04 crc kubenswrapper[4736]: I0316 16:11:04.892089 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:05 crc kubenswrapper[4736]: I0316 16:11:05.453336 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerStarted","Data":"97891e9c17376dc1d1b5d2268bf4032f693238eb75544d8b174fa364a1832eda"} Mar 16 16:11:05 crc kubenswrapper[4736]: I0316 16:11:05.506699 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:06 crc kubenswrapper[4736]: I0316 16:11:06.463432 4736 generic.go:334] "Generic (PLEG): container finished" podID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerID="19725f7ac58f1b12c8151df746a71ae1f780bb980ace10d40199bb71cd721f2e" exitCode=0 Mar 16 16:11:06 crc kubenswrapper[4736]: I0316 16:11:06.463602 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerDied","Data":"19725f7ac58f1b12c8151df746a71ae1f780bb980ace10d40199bb71cd721f2e"} Mar 16 16:11:06 crc kubenswrapper[4736]: I0316 16:11:06.464683 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerStarted","Data":"6c603a2b381ef3b43336046652e0c13cadc9766eb3fbeb5ddd1a68ae69f103cb"} Mar 16 16:11:07 crc kubenswrapper[4736]: I0316 16:11:07.479583 4736 generic.go:334] "Generic (PLEG): container finished" podID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerID="97891e9c17376dc1d1b5d2268bf4032f693238eb75544d8b174fa364a1832eda" exitCode=0 Mar 16 16:11:07 crc kubenswrapper[4736]: I0316 16:11:07.479666 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerDied","Data":"97891e9c17376dc1d1b5d2268bf4032f693238eb75544d8b174fa364a1832eda"} Mar 16 16:11:08 crc kubenswrapper[4736]: I0316 16:11:08.499390 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerStarted","Data":"3e2f5cda2f9511a8632febf60278e6c28c3cb493ffbd0df8cc5e7366a8cda3fc"} Mar 16 16:11:08 crc kubenswrapper[4736]: I0316 16:11:08.506259 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerStarted","Data":"a11e61262c9b20403e2b182ba823feff90bbb8709b1eb90e8e5d8634991a5aa9"} Mar 16 16:11:08 crc kubenswrapper[4736]: I0316 16:11:08.507887 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:11:08 crc kubenswrapper[4736]: I0316 16:11:08.519187 4736 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:11:08 crc kubenswrapper[4736]: I0316 16:11:08.554974 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zp658" podStartSLOduration=3.029837413 podStartE2EDuration="6.551264919s" podCreationTimestamp="2026-03-16 16:11:02 +0000 UTC" firstStartedPulling="2026-03-16 16:11:04.443044605 +0000 UTC m=+3466.170434892" lastFinishedPulling="2026-03-16 16:11:07.964472111 +0000 UTC m=+3469.691862398" observedRunningTime="2026-03-16 16:11:08.534342594 +0000 UTC m=+3470.261732881" watchObservedRunningTime="2026-03-16 16:11:08.551264919 +0000 UTC m=+3470.278655206" Mar 16 16:11:10 crc kubenswrapper[4736]: I0316 16:11:10.529432 4736 generic.go:334] "Generic (PLEG): container finished" podID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerID="a11e61262c9b20403e2b182ba823feff90bbb8709b1eb90e8e5d8634991a5aa9" exitCode=0 Mar 16 16:11:10 crc kubenswrapper[4736]: I0316 16:11:10.529527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerDied","Data":"a11e61262c9b20403e2b182ba823feff90bbb8709b1eb90e8e5d8634991a5aa9"} Mar 16 16:11:11 crc kubenswrapper[4736]: I0316 16:11:11.542178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerStarted","Data":"793ecc1802df020043df9e4f6d5ae773b6f821c243d55e5da77a9f8e55c86941"} Mar 16 16:11:11 crc kubenswrapper[4736]: I0316 16:11:11.564879 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r985x" podStartSLOduration=3.100528846 podStartE2EDuration="7.564861092s" podCreationTimestamp="2026-03-16 16:11:04 +0000 UTC" firstStartedPulling="2026-03-16 16:11:06.485367485 +0000 UTC m=+3468.212757782" lastFinishedPulling="2026-03-16 16:11:10.949699741 +0000 UTC m=+3472.677090028" observedRunningTime="2026-03-16 16:11:11.561158833 +0000 UTC m=+3473.288549140" watchObservedRunningTime="2026-03-16 16:11:11.564861092 +0000 UTC m=+3473.292251379" Mar 16 16:11:12 crc kubenswrapper[4736]: I0316 16:11:12.758174 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:12 crc kubenswrapper[4736]: I0316 16:11:12.758231 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:13 crc kubenswrapper[4736]: I0316 16:11:13.806861 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zp658" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="registry-server" probeResult="failure" output=< Mar 16 16:11:13 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:11:13 crc kubenswrapper[4736]: > Mar 16 16:11:14 crc kubenswrapper[4736]: I0316 16:11:14.892339 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:14 crc kubenswrapper[4736]: I0316 16:11:14.893341 4736 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:15 crc kubenswrapper[4736]: I0316 16:11:15.939976 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r985x" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:11:15 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:11:15 crc kubenswrapper[4736]: > Mar 16 16:11:22 crc kubenswrapper[4736]: I0316 16:11:22.817930 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:22 crc kubenswrapper[4736]: I0316 16:11:22.863466 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:23 crc kubenswrapper[4736]: I0316 16:11:23.103812 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:24 crc kubenswrapper[4736]: I0316 16:11:24.677360 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zp658" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="registry-server" containerID="cri-o://3e2f5cda2f9511a8632febf60278e6c28c3cb493ffbd0df8cc5e7366a8cda3fc" gracePeriod=2 Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.686151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerDied","Data":"3e2f5cda2f9511a8632febf60278e6c28c3cb493ffbd0df8cc5e7366a8cda3fc"} Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.686679 4736 generic.go:334] "Generic (PLEG): container finished" podID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerID="3e2f5cda2f9511a8632febf60278e6c28c3cb493ffbd0df8cc5e7366a8cda3fc" exitCode=0 Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.880174 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.959912 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trths\" (UniqueName: \"kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths\") pod \"c9925e87-ad5d-4788-9b89-6113a9af7dca\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.960009 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content\") pod \"c9925e87-ad5d-4788-9b89-6113a9af7dca\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.960314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities\") pod \"c9925e87-ad5d-4788-9b89-6113a9af7dca\" (UID: \"c9925e87-ad5d-4788-9b89-6113a9af7dca\") " Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.964544 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r985x" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:11:25 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:11:25 crc kubenswrapper[4736]: > Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.972510 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities" (OuterVolumeSpecName: "utilities") pod "c9925e87-ad5d-4788-9b89-6113a9af7dca" (UID: "c9925e87-ad5d-4788-9b89-6113a9af7dca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:11:25 crc kubenswrapper[4736]: I0316 16:11:25.995815 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths" (OuterVolumeSpecName: "kube-api-access-trths") pod "c9925e87-ad5d-4788-9b89-6113a9af7dca" (UID: "c9925e87-ad5d-4788-9b89-6113a9af7dca"). InnerVolumeSpecName "kube-api-access-trths". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.013447 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9925e87-ad5d-4788-9b89-6113a9af7dca" (UID: "c9925e87-ad5d-4788-9b89-6113a9af7dca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.063731 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trths\" (UniqueName: \"kubernetes.io/projected/c9925e87-ad5d-4788-9b89-6113a9af7dca-kube-api-access-trths\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.063770 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.063785 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9925e87-ad5d-4788-9b89-6113a9af7dca-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.697840 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp658" event={"ID":"c9925e87-ad5d-4788-9b89-6113a9af7dca","Type":"ContainerDied","Data":"5b203d0bfa7befd1b00bd084e8c853c46af7c06204fb9134b2c0925dc4d445c3"} Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.697910 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp658" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.699682 4736 scope.go:117] "RemoveContainer" containerID="3e2f5cda2f9511a8632febf60278e6c28c3cb493ffbd0df8cc5e7366a8cda3fc" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.744759 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.758341 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp658"] Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.773548 4736 scope.go:117] "RemoveContainer" containerID="97891e9c17376dc1d1b5d2268bf4032f693238eb75544d8b174fa364a1832eda" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.820077 4736 scope.go:117] "RemoveContainer" containerID="5d00be33fa1987406ad0aa75fa190dfcc839ffa42b47205bfda3b662cce4143b" Mar 16 16:11:26 crc kubenswrapper[4736]: I0316 16:11:26.990697 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" path="/var/lib/kubelet/pods/c9925e87-ad5d-4788-9b89-6113a9af7dca/volumes" Mar 16 16:11:34 crc kubenswrapper[4736]: I0316 16:11:34.977600 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:35 crc kubenswrapper[4736]: I0316 16:11:35.035266 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:35 crc kubenswrapper[4736]: I0316 16:11:35.792578 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:36 crc kubenswrapper[4736]: I0316 16:11:36.840484 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r985x" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" containerID="cri-o://793ecc1802df020043df9e4f6d5ae773b6f821c243d55e5da77a9f8e55c86941" gracePeriod=2 Mar 16 16:11:37 crc kubenswrapper[4736]: I0316 16:11:37.821061 4736 generic.go:334] "Generic (PLEG): container 
finished" podID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerID="793ecc1802df020043df9e4f6d5ae773b6f821c243d55e5da77a9f8e55c86941" exitCode=0 Mar 16 16:11:37 crc kubenswrapper[4736]: I0316 16:11:37.821388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerDied","Data":"793ecc1802df020043df9e4f6d5ae773b6f821c243d55e5da77a9f8e55c86941"} Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.341563 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.425718 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xggjn\" (UniqueName: \"kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn\") pod \"c28ab018-c698-4c1a-a538-b10394f3b93d\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.425928 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities\") pod \"c28ab018-c698-4c1a-a538-b10394f3b93d\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.425967 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content\") pod \"c28ab018-c698-4c1a-a538-b10394f3b93d\" (UID: \"c28ab018-c698-4c1a-a538-b10394f3b93d\") " Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.435374 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities" (OuterVolumeSpecName: "utilities") pod "c28ab018-c698-4c1a-a538-b10394f3b93d" (UID: "c28ab018-c698-4c1a-a538-b10394f3b93d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.463391 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn" (OuterVolumeSpecName: "kube-api-access-xggjn") pod "c28ab018-c698-4c1a-a538-b10394f3b93d" (UID: "c28ab018-c698-4c1a-a538-b10394f3b93d"). InnerVolumeSpecName "kube-api-access-xggjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.509744 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.509836 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.511853 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.513154 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.513217 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db" gracePeriod=600 Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.528849 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.528872 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xggjn\" (UniqueName: \"kubernetes.io/projected/c28ab018-c698-4c1a-a538-b10394f3b93d-kube-api-access-xggjn\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.666661 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c28ab018-c698-4c1a-a538-b10394f3b93d" (UID: "c28ab018-c698-4c1a-a538-b10394f3b93d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.732170 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c28ab018-c698-4c1a-a538-b10394f3b93d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.833588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r985x" event={"ID":"c28ab018-c698-4c1a-a538-b10394f3b93d","Type":"ContainerDied","Data":"6c603a2b381ef3b43336046652e0c13cadc9766eb3fbeb5ddd1a68ae69f103cb"} Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.833642 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r985x" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.835671 4736 scope.go:117] "RemoveContainer" containerID="793ecc1802df020043df9e4f6d5ae773b6f821c243d55e5da77a9f8e55c86941" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.838700 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db" exitCode=0 Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.838731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db"} Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.872612 4736 scope.go:117] "RemoveContainer" containerID="a11e61262c9b20403e2b182ba823feff90bbb8709b1eb90e8e5d8634991a5aa9" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.884800 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.895061 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r985x"] Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.920946 4736 scope.go:117] "RemoveContainer" containerID="19725f7ac58f1b12c8151df746a71ae1f780bb980ace10d40199bb71cd721f2e" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.951388 4736 scope.go:117] "RemoveContainer" containerID="011daf9c5d1617a315c10aab9b20dc4f72d82f8ec864fcb3896f8136ec5038c8" Mar 16 16:11:38 crc kubenswrapper[4736]: I0316 16:11:38.989300 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" path="/var/lib/kubelet/pods/c28ab018-c698-4c1a-a538-b10394f3b93d/volumes" Mar 16 16:11:39 crc kubenswrapper[4736]: I0316 16:11:39.868548 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7"} Mar 16 16:11:52 crc kubenswrapper[4736]: I0316 16:11:52.050330 4736 scope.go:117] "RemoveContainer" containerID="b66c33941001c6fd0b992e49d3fe81f4fc1a0f26d594c9beed01e692f4197cd0" Mar 16 16:11:52 crc kubenswrapper[4736]: I0316 16:11:52.142759 4736 scope.go:117] "RemoveContainer" containerID="80d62407c507f6ae7df6ff2b2013b2d36471e97ea14f4e1330cd26000e4ea98f" Mar 16 16:11:52 crc kubenswrapper[4736]: I0316 16:11:52.185540 4736 scope.go:117] "RemoveContainer" containerID="2acf0155f7ed872d3879eda91d252b136d417bae9121cf038aeaaf050bd98cb1" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.717887 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561292-w4x2x"] Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735241 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735288 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735315 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="extract-content" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735322 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="extract-content" Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735336 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="extract-content" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735343 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="extract-content" Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735386 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735394 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735408 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="extract-utilities" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735415 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="extract-utilities" Mar 16 16:12:00 crc kubenswrapper[4736]: E0316 16:12:00.735434 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="extract-utilities" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.735441 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="extract-utilities" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.736555 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c28ab018-c698-4c1a-a538-b10394f3b93d" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.737202 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9925e87-ad5d-4788-9b89-6113a9af7dca" containerName="registry-server" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.752230 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.784945 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.784948 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.784952 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.847962 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561292-w4x2x"] Mar 16 16:12:00 crc kubenswrapper[4736]: I0316 16:12:00.901092 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmgc\" (UniqueName: \"kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc\") pod \"auto-csr-approver-29561292-w4x2x\" (UID: \"3c4b6b4a-0d61-447b-b687-84dfe6d52466\") " pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:01 crc kubenswrapper[4736]: I0316 16:12:01.003203 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcmgc\" (UniqueName: \"kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc\") pod \"auto-csr-approver-29561292-w4x2x\" (UID: \"3c4b6b4a-0d61-447b-b687-84dfe6d52466\") " pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:01 crc kubenswrapper[4736]: I0316 16:12:01.050096 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcmgc\" (UniqueName: \"kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc\") pod \"auto-csr-approver-29561292-w4x2x\" (UID: \"3c4b6b4a-0d61-447b-b687-84dfe6d52466\") " pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:01 crc kubenswrapper[4736]: I0316 16:12:01.111451 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:02 crc kubenswrapper[4736]: I0316 16:12:02.732174 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561292-w4x2x"] Mar 16 16:12:03 crc kubenswrapper[4736]: I0316 16:12:03.109017 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" event={"ID":"3c4b6b4a-0d61-447b-b687-84dfe6d52466","Type":"ContainerStarted","Data":"4d6f4b79d47db615e6948a6c5cd003d17b9bfba94b9a7ac3ef29a32a01fa1b3d"} Mar 16 16:12:05 crc kubenswrapper[4736]: I0316 16:12:05.127965 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" event={"ID":"3c4b6b4a-0d61-447b-b687-84dfe6d52466","Type":"ContainerStarted","Data":"e7383854e3c90f17aeefe32956e194a26960a61005c130f79635912bad667a98"} Mar 16 16:12:05 crc kubenswrapper[4736]: I0316 16:12:05.163321 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" podStartSLOduration=4.061094611 podStartE2EDuration="5.155209088s" podCreationTimestamp="2026-03-16 16:12:00 +0000 UTC" firstStartedPulling="2026-03-16 16:12:02.787202686 +0000 UTC m=+3524.514592973" lastFinishedPulling="2026-03-16 16:12:03.881317163 +0000 UTC m=+3525.608707450" observedRunningTime="2026-03-16 16:12:05.148164019 +0000 UTC m=+3526.875554306" watchObservedRunningTime="2026-03-16 16:12:05.155209088 +0000 UTC m=+3526.882599375" Mar 16 16:12:07 crc kubenswrapper[4736]: I0316 16:12:07.143630 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c4b6b4a-0d61-447b-b687-84dfe6d52466" containerID="e7383854e3c90f17aeefe32956e194a26960a61005c130f79635912bad667a98" exitCode=0 Mar 16 16:12:07 crc kubenswrapper[4736]: I0316 16:12:07.143785 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" event={"ID":"3c4b6b4a-0d61-447b-b687-84dfe6d52466","Type":"ContainerDied","Data":"e7383854e3c90f17aeefe32956e194a26960a61005c130f79635912bad667a98"} Mar 16 16:12:08 crc kubenswrapper[4736]: I0316 16:12:08.733297 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:08 crc kubenswrapper[4736]: I0316 16:12:08.872531 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcmgc\" (UniqueName: \"kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc\") pod \"3c4b6b4a-0d61-447b-b687-84dfe6d52466\" (UID: \"3c4b6b4a-0d61-447b-b687-84dfe6d52466\") " Mar 16 16:12:08 crc kubenswrapper[4736]: I0316 16:12:08.904873 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc" (OuterVolumeSpecName: "kube-api-access-xcmgc") pod "3c4b6b4a-0d61-447b-b687-84dfe6d52466" (UID: "3c4b6b4a-0d61-447b-b687-84dfe6d52466"). InnerVolumeSpecName "kube-api-access-xcmgc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:12:08 crc kubenswrapper[4736]: I0316 16:12:08.974662 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcmgc\" (UniqueName: \"kubernetes.io/projected/3c4b6b4a-0d61-447b-b687-84dfe6d52466-kube-api-access-xcmgc\") on node \"crc\" DevicePath \"\"" Mar 16 16:12:09 crc kubenswrapper[4736]: I0316 16:12:09.162672 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" event={"ID":"3c4b6b4a-0d61-447b-b687-84dfe6d52466","Type":"ContainerDied","Data":"4d6f4b79d47db615e6948a6c5cd003d17b9bfba94b9a7ac3ef29a32a01fa1b3d"} Mar 16 16:12:09 crc kubenswrapper[4736]: I0316 16:12:09.162942 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561292-w4x2x" Mar 16 16:12:09 crc kubenswrapper[4736]: I0316 16:12:09.163475 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d6f4b79d47db615e6948a6c5cd003d17b9bfba94b9a7ac3ef29a32a01fa1b3d" Mar 16 16:12:09 crc kubenswrapper[4736]: I0316 16:12:09.904448 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561286-88p5t"] Mar 16 16:12:09 crc kubenswrapper[4736]: I0316 16:12:09.914291 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561286-88p5t"] Mar 16 16:12:10 crc kubenswrapper[4736]: I0316 16:12:10.990351 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08d77cb-4954-4f67-95e3-5f214c7b3ddd" path="/var/lib/kubelet/pods/f08d77cb-4954-4f67-95e3-5f214c7b3ddd/volumes" Mar 16 16:12:36 crc kubenswrapper[4736]: E0316 16:12:36.314447 4736 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.30:38260->38.102.83.30:38289: read tcp 38.102.83.30:38260->38.102.83.30:38289: read: connection reset by peer Mar 16 16:12:52 crc kubenswrapper[4736]: I0316 16:12:52.692900 4736 scope.go:117] "RemoveContainer" containerID="8531b8f6bf1e8be892b00a20fbb1549047388c151d723d8e862724df8825753d" Mar 16 16:13:38 crc kubenswrapper[4736]: I0316 16:13:38.514022 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:13:38 crc kubenswrapper[4736]: I0316 16:13:38.520305 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.673229 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561294-j8kff"] Mar 16 16:14:00 crc kubenswrapper[4736]: E0316 16:14:00.686318 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4b6b4a-0d61-447b-b687-84dfe6d52466" containerName="oc" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.686371 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4b6b4a-0d61-447b-b687-84dfe6d52466" containerName="oc" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.689125 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3c4b6b4a-0d61-447b-b687-84dfe6d52466" containerName="oc" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.705819 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.723156 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.723180 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.723156 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.801113 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwzgh\" (UniqueName: \"kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh\") pod \"auto-csr-approver-29561294-j8kff\" (UID: \"92d80b32-f733-4f90-b312-3be0b8443fe7\") " pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.846953 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561294-j8kff"] Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.902811 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwzgh\" (UniqueName: \"kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh\") pod \"auto-csr-approver-29561294-j8kff\" (UID: \"92d80b32-f733-4f90-b312-3be0b8443fe7\") " pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:00 crc kubenswrapper[4736]: I0316 16:14:00.958298 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwzgh\" (UniqueName: \"kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh\") pod \"auto-csr-approver-29561294-j8kff\" (UID: \"92d80b32-f733-4f90-b312-3be0b8443fe7\") " pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:01 crc kubenswrapper[4736]: I0316 16:14:01.092252 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:02 crc kubenswrapper[4736]: I0316 16:14:02.734797 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561294-j8kff"] Mar 16 16:14:03 crc kubenswrapper[4736]: I0316 16:14:03.183725 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561294-j8kff" event={"ID":"92d80b32-f733-4f90-b312-3be0b8443fe7","Type":"ContainerStarted","Data":"31384b33e5a29f5b701fa4ac587efacc953284563d32dcf0d44854bbe381f76d"} Mar 16 16:14:05 crc kubenswrapper[4736]: I0316 16:14:05.203205 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561294-j8kff" event={"ID":"92d80b32-f733-4f90-b312-3be0b8443fe7","Type":"ContainerStarted","Data":"6fb3f577668c57014f6894cad5aa889bc53590d2fefdd1a4e0bc0928c9c9c6bf"} Mar 16 16:14:05 crc kubenswrapper[4736]: I0316 16:14:05.224966 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561294-j8kff" podStartSLOduration=4.039259964 podStartE2EDuration="5.223429835s" podCreationTimestamp="2026-03-16 16:14:00 +0000 UTC" firstStartedPulling="2026-03-16 16:14:02.800133034 +0000 UTC m=+3644.527523311" lastFinishedPulling="2026-03-16 16:14:03.984302895 +0000 UTC m=+3645.711693182" observedRunningTime="2026-03-16 16:14:05.215761659 +0000 UTC m=+3646.943151956" watchObservedRunningTime="2026-03-16 16:14:05.223429835 +0000 UTC m=+3646.950820122" Mar 16 16:14:06 crc kubenswrapper[4736]: I0316 16:14:06.218563 4736 generic.go:334] "Generic (PLEG): container finished" podID="92d80b32-f733-4f90-b312-3be0b8443fe7" containerID="6fb3f577668c57014f6894cad5aa889bc53590d2fefdd1a4e0bc0928c9c9c6bf" exitCode=0 Mar 16 16:14:06 crc kubenswrapper[4736]: I0316 16:14:06.225049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561294-j8kff" event={"ID":"92d80b32-f733-4f90-b312-3be0b8443fe7","Type":"ContainerDied","Data":"6fb3f577668c57014f6894cad5aa889bc53590d2fefdd1a4e0bc0928c9c9c6bf"} Mar 16 16:14:07 crc kubenswrapper[4736]: I0316 16:14:07.871404 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:07 crc kubenswrapper[4736]: I0316 16:14:07.942582 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwzgh\" (UniqueName: \"kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh\") pod \"92d80b32-f733-4f90-b312-3be0b8443fe7\" (UID: \"92d80b32-f733-4f90-b312-3be0b8443fe7\") " Mar 16 16:14:07 crc kubenswrapper[4736]: I0316 16:14:07.985603 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh" (OuterVolumeSpecName: "kube-api-access-nwzgh") pod "92d80b32-f733-4f90-b312-3be0b8443fe7" (UID: "92d80b32-f733-4f90-b312-3be0b8443fe7"). InnerVolumeSpecName "kube-api-access-nwzgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.045774 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwzgh\" (UniqueName: \"kubernetes.io/projected/92d80b32-f733-4f90-b312-3be0b8443fe7-kube-api-access-nwzgh\") on node \"crc\" DevicePath \"\"" Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.246730 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561294-j8kff" event={"ID":"92d80b32-f733-4f90-b312-3be0b8443fe7","Type":"ContainerDied","Data":"31384b33e5a29f5b701fa4ac587efacc953284563d32dcf0d44854bbe381f76d"} Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.247017 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561294-j8kff" Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.247089 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31384b33e5a29f5b701fa4ac587efacc953284563d32dcf0d44854bbe381f76d" Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.361994 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561288-9hxnz"] Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.372520 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561288-9hxnz"] Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.507821 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.510348 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:14:08 crc kubenswrapper[4736]: I0316 16:14:08.990315 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e12120-062f-47ca-a22b-bb863e9adab8" path="/var/lib/kubelet/pods/a4e12120-062f-47ca-a22b-bb863e9adab8/volumes" Mar 16 16:14:38 crc kubenswrapper[4736]: I0316 16:14:38.508315 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:14:38 crc kubenswrapper[4736]: I0316 16:14:38.509016 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:14:38 crc kubenswrapper[4736]: I0316 16:14:38.509068 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:14:38 crc kubenswrapper[4736]: I0316 16:14:38.510834 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:14:38 crc kubenswrapper[4736]: I0316 16:14:38.510909 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" gracePeriod=600 Mar 16 16:14:38 crc kubenswrapper[4736]: E0316 16:14:38.633541 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:14:39 crc kubenswrapper[4736]: I0316 16:14:39.535395 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" exitCode=0 Mar 16 16:14:39 crc kubenswrapper[4736]: I0316 16:14:39.535446 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7"} Mar 16 16:14:39 crc kubenswrapper[4736]: I0316 16:14:39.537389 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:14:39 crc kubenswrapper[4736]: I0316 16:14:39.537513 4736 scope.go:117] "RemoveContainer" containerID="b44025c007220700f200d3776777feab228dcb2788d109664cc698f3069ea6db" Mar 16 16:14:39 crc kubenswrapper[4736]: E0316 16:14:39.537689 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:14:51 crc kubenswrapper[4736]: I0316 16:14:51.977963 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:14:51 crc kubenswrapper[4736]: E0316 16:14:51.978810 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:14:53 crc kubenswrapper[4736]: I0316 16:14:53.304305 4736 scope.go:117] "RemoveContainer" containerID="dd6ba9fd1cef5b1e4b86a56ec8407e653761377dfa1ff56d2c742e890d479eab" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.212521 4736 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q"] Mar 16 16:15:00 crc kubenswrapper[4736]: E0316 16:15:00.219440 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d80b32-f733-4f90-b312-3be0b8443fe7" containerName="oc" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.219476 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d80b32-f733-4f90-b312-3be0b8443fe7" containerName="oc" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.219820 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d80b32-f733-4f90-b312-3be0b8443fe7" containerName="oc" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.223439 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.232772 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.233442 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.241173 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q"] Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.309549 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.309690 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4wt\" (UniqueName: \"kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.309762 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.411815 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4wt\" (UniqueName: \"kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.412283 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: 
\"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.412362 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.414195 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.442887 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4wt\" (UniqueName: \"kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.443339 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume\") pod \"collect-profiles-29561295-75v2q\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:00 crc kubenswrapper[4736]: I0316 16:15:00.555575 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:02 crc kubenswrapper[4736]: I0316 16:15:02.058427 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q"] Mar 16 16:15:02 crc kubenswrapper[4736]: W0316 16:15:02.105833 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77cf92e2_f06a_4211_b2f3_9c65fdef5987.slice/crio-f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984 WatchSource:0}: Error finding container f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984: Status 404 returned error can't find the container with id f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984 Mar 16 16:15:02 crc kubenswrapper[4736]: I0316 16:15:02.747742 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" event={"ID":"77cf92e2-f06a-4211-b2f3-9c65fdef5987","Type":"ContainerStarted","Data":"3376f8edc3a0196b1244bf39e51ef57f81bc11a9d9afc707646c3073650fdf7d"} Mar 16 16:15:02 crc kubenswrapper[4736]: I0316 16:15:02.748045 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" event={"ID":"77cf92e2-f06a-4211-b2f3-9c65fdef5987","Type":"ContainerStarted","Data":"f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984"} Mar 16 16:15:02 crc kubenswrapper[4736]: I0316 16:15:02.772076 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" podStartSLOduration=2.7720581170000003 podStartE2EDuration="2.772058117s" podCreationTimestamp="2026-03-16 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 16:15:02.765153871 +0000 UTC m=+3704.492544158" watchObservedRunningTime="2026-03-16 16:15:02.772058117 +0000 UTC m=+3704.499448404" Mar 16 16:15:02 crc kubenswrapper[4736]: I0316 16:15:02.977737 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:15:02 crc kubenswrapper[4736]: E0316 16:15:02.978395 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:15:03 crc kubenswrapper[4736]: I0316 16:15:03.759850 4736 generic.go:334] "Generic (PLEG): container finished" podID="77cf92e2-f06a-4211-b2f3-9c65fdef5987" containerID="3376f8edc3a0196b1244bf39e51ef57f81bc11a9d9afc707646c3073650fdf7d" exitCode=0 Mar 16 16:15:03 crc kubenswrapper[4736]: I0316 16:15:03.759955 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" event={"ID":"77cf92e2-f06a-4211-b2f3-9c65fdef5987","Type":"ContainerDied","Data":"3376f8edc3a0196b1244bf39e51ef57f81bc11a9d9afc707646c3073650fdf7d"} Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.260698 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.306378 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume\") pod \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.307622 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4wt\" (UniqueName: \"kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt\") pod \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.308024 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume\") pod \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\" (UID: \"77cf92e2-f06a-4211-b2f3-9c65fdef5987\") " Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.311277 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume" (OuterVolumeSpecName: "config-volume") pod "77cf92e2-f06a-4211-b2f3-9c65fdef5987" (UID: "77cf92e2-f06a-4211-b2f3-9c65fdef5987"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.319340 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77cf92e2-f06a-4211-b2f3-9c65fdef5987" (UID: "77cf92e2-f06a-4211-b2f3-9c65fdef5987"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.330734 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt" (OuterVolumeSpecName: "kube-api-access-kq4wt") pod "77cf92e2-f06a-4211-b2f3-9c65fdef5987" (UID: "77cf92e2-f06a-4211-b2f3-9c65fdef5987"). InnerVolumeSpecName "kube-api-access-kq4wt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.411518 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77cf92e2-f06a-4211-b2f3-9c65fdef5987-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.411560 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77cf92e2-f06a-4211-b2f3-9c65fdef5987-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.411577 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq4wt\" (UniqueName: \"kubernetes.io/projected/77cf92e2-f06a-4211-b2f3-9c65fdef5987-kube-api-access-kq4wt\") on node \"crc\" DevicePath \"\"" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.797530 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" event={"ID":"77cf92e2-f06a-4211-b2f3-9c65fdef5987","Type":"ContainerDied","Data":"f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984"} Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.797749 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f57ba58d72adf5c0165d234d4cc9c6edb2e01d66b620743b941a0a63e95ed984" Mar 16 16:15:05 crc kubenswrapper[4736]: I0316 16:15:05.797674 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q" Mar 16 16:15:06 crc kubenswrapper[4736]: I0316 16:15:06.373377 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk"] Mar 16 16:15:06 crc kubenswrapper[4736]: I0316 16:15:06.387035 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561250-jqjsk"] Mar 16 16:15:07 crc kubenswrapper[4736]: I0316 16:15:07.002838 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929f108f-20f2-47ca-8e05-e29b5e7c4609" path="/var/lib/kubelet/pods/929f108f-20f2-47ca-8e05-e29b5e7c4609/volumes" Mar 16 16:15:13 crc kubenswrapper[4736]: I0316 16:15:13.978208 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:15:13 crc kubenswrapper[4736]: E0316 16:15:13.979828 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:15:24 crc kubenswrapper[4736]: I0316 16:15:24.978312 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:15:24 crc kubenswrapper[4736]: E0316 16:15:24.979231 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:15:35 crc kubenswrapper[4736]: I0316 16:15:35.978891 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:15:35 crc kubenswrapper[4736]: E0316 16:15:35.980293 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:15:48 crc kubenswrapper[4736]: I0316 16:15:48.988094 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:15:48 crc kubenswrapper[4736]: E0316 16:15:48.989070 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:15:53 crc kubenswrapper[4736]: I0316 16:15:53.516716 4736 scope.go:117] "RemoveContainer" containerID="5cced67b82a376a1411e2ed994b07c0fe0b37e14cde8e6d954bb48e8fe1a1769" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.164436 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561296-tgdlv"] Mar 16 16:16:00 crc kubenswrapper[4736]: E0316 16:16:00.166457 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77cf92e2-f06a-4211-b2f3-9c65fdef5987" containerName="collect-profiles" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.166486 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="77cf92e2-f06a-4211-b2f3-9c65fdef5987" containerName="collect-profiles" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.166782 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="77cf92e2-f06a-4211-b2f3-9c65fdef5987" containerName="collect-profiles" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.167628 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.175017 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.175269 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.177567 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.182624 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561296-tgdlv"] Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.318693 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5685\" (UniqueName: \"kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685\") pod \"auto-csr-approver-29561296-tgdlv\" (UID: \"9ac3d572-9b5f-4831-b0b9-3694ed742563\") " pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.420787 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5685\" (UniqueName: \"kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685\") pod \"auto-csr-approver-29561296-tgdlv\" (UID: \"9ac3d572-9b5f-4831-b0b9-3694ed742563\") " pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.455708 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5685\" (UniqueName: \"kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685\") pod \"auto-csr-approver-29561296-tgdlv\" (UID: \"9ac3d572-9b5f-4831-b0b9-3694ed742563\") " pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.492567 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:00 crc kubenswrapper[4736]: I0316 16:16:00.981203 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:16:00 crc kubenswrapper[4736]: E0316 16:16:00.982309 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:16:01 crc kubenswrapper[4736]: I0316 16:16:01.233756 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561296-tgdlv"] Mar 16 16:16:01 crc kubenswrapper[4736]: I0316 16:16:01.320936 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" event={"ID":"9ac3d572-9b5f-4831-b0b9-3694ed742563","Type":"ContainerStarted","Data":"33b6e2da01686b0a6deb557ea555f679fbbdc1bc4cab96c8f3413f3b856b6f50"} Mar 16 16:16:04 crc kubenswrapper[4736]: I0316 16:16:04.348309 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" event={"ID":"9ac3d572-9b5f-4831-b0b9-3694ed742563","Type":"ContainerStarted","Data":"ec5740b8b51fc9d7e80eb844a6129a1ba4b3ca651801fe19c5acdc38451c3fdb"} Mar 16 16:16:04 crc kubenswrapper[4736]: I0316 16:16:04.363023 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" podStartSLOduration=3.315013075 podStartE2EDuration="4.363005642s" podCreationTimestamp="2026-03-16 16:16:00 +0000 UTC" firstStartedPulling="2026-03-16 16:16:01.249952913 +0000 UTC m=+3762.977343200" lastFinishedPulling="2026-03-16 16:16:02.29794548 +0000 UTC m=+3764.025335767" observedRunningTime="2026-03-16 16:16:04.361474721 +0000 UTC m=+3766.088865008" watchObservedRunningTime="2026-03-16 16:16:04.363005642 +0000 UTC m=+3766.090395929" Mar 16 16:16:05 crc kubenswrapper[4736]: I0316 16:16:05.358816 4736 generic.go:334] "Generic (PLEG): container finished" podID="9ac3d572-9b5f-4831-b0b9-3694ed742563" containerID="ec5740b8b51fc9d7e80eb844a6129a1ba4b3ca651801fe19c5acdc38451c3fdb" exitCode=0 Mar 16 16:16:05 crc kubenswrapper[4736]: I0316 16:16:05.358890 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" event={"ID":"9ac3d572-9b5f-4831-b0b9-3694ed742563","Type":"ContainerDied","Data":"ec5740b8b51fc9d7e80eb844a6129a1ba4b3ca651801fe19c5acdc38451c3fdb"} Mar 16 16:16:06 crc kubenswrapper[4736]: I0316 16:16:06.965627 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.085086 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5685\" (UniqueName: \"kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685\") pod \"9ac3d572-9b5f-4831-b0b9-3694ed742563\" (UID: \"9ac3d572-9b5f-4831-b0b9-3694ed742563\") " Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.095940 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685" (OuterVolumeSpecName: "kube-api-access-g5685") pod "9ac3d572-9b5f-4831-b0b9-3694ed742563" (UID: "9ac3d572-9b5f-4831-b0b9-3694ed742563"). InnerVolumeSpecName "kube-api-access-g5685". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.188050 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5685\" (UniqueName: \"kubernetes.io/projected/9ac3d572-9b5f-4831-b0b9-3694ed742563-kube-api-access-g5685\") on node \"crc\" DevicePath \"\"" Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.379881 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" event={"ID":"9ac3d572-9b5f-4831-b0b9-3694ed742563","Type":"ContainerDied","Data":"33b6e2da01686b0a6deb557ea555f679fbbdc1bc4cab96c8f3413f3b856b6f50"} Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.379919 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b6e2da01686b0a6deb557ea555f679fbbdc1bc4cab96c8f3413f3b856b6f50" Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.379973 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561296-tgdlv" Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.464607 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561290-6q6qm"] Mar 16 16:16:07 crc kubenswrapper[4736]: I0316 16:16:07.475826 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561290-6q6qm"] Mar 16 16:16:08 crc kubenswrapper[4736]: I0316 16:16:08.989681 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff8c1171-e56d-44e6-bfdf-180d12747766" path="/var/lib/kubelet/pods/ff8c1171-e56d-44e6-bfdf-180d12747766/volumes" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.669967 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:16:09 crc kubenswrapper[4736]: E0316 16:16:09.670479 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac3d572-9b5f-4831-b0b9-3694ed742563" containerName="oc" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.670503 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac3d572-9b5f-4831-b0b9-3694ed742563" containerName="oc" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.670756 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac3d572-9b5f-4831-b0b9-3694ed742563" containerName="oc" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.674944 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.697420 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.854291 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdr2\" (UniqueName: \"kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.854390 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.854459 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.955613 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.955748 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdr2\" (UniqueName: \"kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.955812 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.956781 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:09 crc kubenswrapper[4736]: I0316 16:16:09.956867 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:10 crc kubenswrapper[4736]: I0316 16:16:10.005088 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6bdr2\" (UniqueName: \"kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2\") pod \"redhat-operators-bw9rw\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:10 crc kubenswrapper[4736]: I0316 16:16:10.299866 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:10 crc kubenswrapper[4736]: I0316 16:16:10.830350 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:16:11 crc kubenswrapper[4736]: I0316 16:16:11.414627 4736 generic.go:334] "Generic (PLEG): container finished" podID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerID="307c4ee6936059607a463452daccd6ea3e20cf736fad3b229b242ffd4fc24c81" exitCode=0 Mar 16 16:16:11 crc kubenswrapper[4736]: I0316 16:16:11.414695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerDied","Data":"307c4ee6936059607a463452daccd6ea3e20cf736fad3b229b242ffd4fc24c81"} Mar 16 16:16:11 crc kubenswrapper[4736]: I0316 16:16:11.414726 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerStarted","Data":"72f657989ac60e9ce5ef2143f11aa2e81d4e3f5a9d04f55b78ecdae0636f2308"} Mar 16 16:16:11 crc kubenswrapper[4736]: I0316 16:16:11.420321 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:16:13 crc kubenswrapper[4736]: I0316 16:16:13.434324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerStarted","Data":"300fd134414219d067b57d0e64c70a0316fd07602896627011e25ddbb5c25248"} Mar 16 16:16:14 crc kubenswrapper[4736]: I0316 16:16:14.980447 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:16:14 crc kubenswrapper[4736]: E0316 16:16:14.981828 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:16:19 crc kubenswrapper[4736]: I0316 16:16:19.496781 4736 generic.go:334] "Generic (PLEG): container finished" podID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerID="300fd134414219d067b57d0e64c70a0316fd07602896627011e25ddbb5c25248" exitCode=0 Mar 16 16:16:19 crc kubenswrapper[4736]: I0316 16:16:19.496872 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerDied","Data":"300fd134414219d067b57d0e64c70a0316fd07602896627011e25ddbb5c25248"} Mar 16 16:16:20 crc kubenswrapper[4736]: I0316 16:16:20.508865 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" 
event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerStarted","Data":"60fc4e114400a52a55a52e7cf316447b0bb629784a1728b1d2d32cdb411a1001"} Mar 16 16:16:20 crc kubenswrapper[4736]: I0316 16:16:20.530095 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bw9rw" podStartSLOduration=2.853876079 podStartE2EDuration="11.530076378s" podCreationTimestamp="2026-03-16 16:16:09 +0000 UTC" firstStartedPulling="2026-03-16 16:16:11.416235364 +0000 UTC m=+3773.143625651" lastFinishedPulling="2026-03-16 16:16:20.092435663 +0000 UTC m=+3781.819825950" observedRunningTime="2026-03-16 16:16:20.524233071 +0000 UTC m=+3782.251623358" watchObservedRunningTime="2026-03-16 16:16:20.530076378 +0000 UTC m=+3782.257466665" Mar 16 16:16:28 crc kubenswrapper[4736]: I0316 16:16:28.987893 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:16:28 crc kubenswrapper[4736]: E0316 16:16:28.988633 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:16:30 crc kubenswrapper[4736]: I0316 16:16:30.301352 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:30 crc kubenswrapper[4736]: I0316 16:16:30.302054 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:16:31 crc kubenswrapper[4736]: I0316 16:16:31.367310 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bw9rw" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:16:31 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:16:31 crc kubenswrapper[4736]: > Mar 16 16:16:41 crc kubenswrapper[4736]: I0316 16:16:41.439525 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bw9rw" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:16:41 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:16:41 crc kubenswrapper[4736]: > Mar 16 16:16:43 crc kubenswrapper[4736]: I0316 16:16:43.978973 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:16:43 crc kubenswrapper[4736]: E0316 16:16:43.979293 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:16:51 crc kubenswrapper[4736]: I0316 16:16:51.367038 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bw9rw" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" 
containerName="registry-server" probeResult="failure" output=< Mar 16 16:16:51 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:16:51 crc kubenswrapper[4736]: > Mar 16 16:16:53 crc kubenswrapper[4736]: I0316 16:16:53.688036 4736 scope.go:117] "RemoveContainer" containerID="6b7559ae65116fdafb6a761bb20fdc1579c764e6b8ce41b99bf7065a28019a44" Mar 16 16:16:59 crc kubenswrapper[4736]: I0316 16:16:59.006699 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:16:59 crc kubenswrapper[4736]: E0316 16:16:59.010791 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:17:01 crc kubenswrapper[4736]: I0316 16:17:01.351533 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bw9rw" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:17:01 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:17:01 crc kubenswrapper[4736]: > Mar 16 16:17:10 crc kubenswrapper[4736]: I0316 16:17:10.388400 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:17:10 crc kubenswrapper[4736]: I0316 16:17:10.452952 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:17:10 crc kubenswrapper[4736]: I0316 16:17:10.622172 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:17:11 crc kubenswrapper[4736]: I0316 16:17:11.984500 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bw9rw" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" containerID="cri-o://60fc4e114400a52a55a52e7cf316447b0bb629784a1728b1d2d32cdb411a1001" gracePeriod=2 Mar 16 16:17:12 crc kubenswrapper[4736]: I0316 16:17:12.998610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerDied","Data":"60fc4e114400a52a55a52e7cf316447b0bb629784a1728b1d2d32cdb411a1001"} Mar 16 16:17:12 crc kubenswrapper[4736]: I0316 16:17:12.998343 4736 generic.go:334] "Generic (PLEG): container finished" podID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerID="60fc4e114400a52a55a52e7cf316447b0bb629784a1728b1d2d32cdb411a1001" exitCode=0 Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.420906 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.540831 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bdr2\" (UniqueName: \"kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2\") pod \"18dfe22f-d1c8-419f-ac98-8a792518c82d\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.540932 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content\") pod \"18dfe22f-d1c8-419f-ac98-8a792518c82d\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.541251 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities\") pod \"18dfe22f-d1c8-419f-ac98-8a792518c82d\" (UID: \"18dfe22f-d1c8-419f-ac98-8a792518c82d\") " Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.549351 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities" (OuterVolumeSpecName: "utilities") pod "18dfe22f-d1c8-419f-ac98-8a792518c82d" (UID: "18dfe22f-d1c8-419f-ac98-8a792518c82d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.577637 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2" (OuterVolumeSpecName: "kube-api-access-6bdr2") pod "18dfe22f-d1c8-419f-ac98-8a792518c82d" (UID: "18dfe22f-d1c8-419f-ac98-8a792518c82d"). InnerVolumeSpecName "kube-api-access-6bdr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.644860 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.644902 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bdr2\" (UniqueName: \"kubernetes.io/projected/18dfe22f-d1c8-419f-ac98-8a792518c82d-kube-api-access-6bdr2\") on node \"crc\" DevicePath \"\"" Mar 16 16:17:13 crc kubenswrapper[4736]: I0316 16:17:13.978811 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:17:14 crc kubenswrapper[4736]: E0316 16:17:13.979279 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.013520 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "18dfe22f-d1c8-419f-ac98-8a792518c82d" (UID: "18dfe22f-d1c8-419f-ac98-8a792518c82d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.015974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bw9rw" event={"ID":"18dfe22f-d1c8-419f-ac98-8a792518c82d","Type":"ContainerDied","Data":"72f657989ac60e9ce5ef2143f11aa2e81d4e3f5a9d04f55b78ecdae0636f2308"} Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.016254 4736 scope.go:117] "RemoveContainer" containerID="60fc4e114400a52a55a52e7cf316447b0bb629784a1728b1d2d32cdb411a1001" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.016791 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bw9rw" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.052065 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/18dfe22f-d1c8-419f-ac98-8a792518c82d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.084883 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.090985 4736 scope.go:117] "RemoveContainer" containerID="300fd134414219d067b57d0e64c70a0316fd07602896627011e25ddbb5c25248" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.097888 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bw9rw"] Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.133836 4736 scope.go:117] "RemoveContainer" containerID="307c4ee6936059607a463452daccd6ea3e20cf736fad3b229b242ffd4fc24c81" Mar 16 16:17:14 crc kubenswrapper[4736]: I0316 16:17:14.992656 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" path="/var/lib/kubelet/pods/18dfe22f-d1c8-419f-ac98-8a792518c82d/volumes" Mar 16 16:17:26 crc kubenswrapper[4736]: I0316 16:17:26.978406 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:17:26 crc kubenswrapper[4736]: E0316 16:17:26.979370 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:17:38 crc kubenswrapper[4736]: I0316 16:17:38.986090 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:17:38 crc kubenswrapper[4736]: E0316 16:17:38.988225 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:17:51 crc kubenswrapper[4736]: I0316 16:17:51.979124 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:17:51 crc kubenswrapper[4736]: E0316 16:17:51.979924 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.480983 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561298-rcgzr"] Mar 16 16:18:00 crc kubenswrapper[4736]: E0316 16:18:00.490279 4736 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="extract-utilities" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.490480 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="extract-utilities" Mar 16 16:18:00 crc kubenswrapper[4736]: E0316 16:18:00.490527 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="extract-content" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.490535 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="extract-content" Mar 16 16:18:00 crc kubenswrapper[4736]: E0316 16:18:00.490562 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.490571 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.494400 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dfe22f-d1c8-419f-ac98-8a792518c82d" containerName="registry-server" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.508653 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.522214 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.522388 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.522389 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.569762 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561298-rcgzr"] Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.671447 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gjl\" (UniqueName: \"kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl\") pod \"auto-csr-approver-29561298-rcgzr\" (UID: \"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364\") " pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.773305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gjl\" (UniqueName: \"kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl\") pod \"auto-csr-approver-29561298-rcgzr\" (UID: \"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364\") " pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.808599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9gjl\" (UniqueName: \"kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl\") pod \"auto-csr-approver-29561298-rcgzr\" (UID: \"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364\") " pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:00 crc kubenswrapper[4736]: I0316 16:18:00.851092 
4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:02 crc kubenswrapper[4736]: I0316 16:18:02.183281 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561298-rcgzr"] Mar 16 16:18:02 crc kubenswrapper[4736]: I0316 16:18:02.376192 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" event={"ID":"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364","Type":"ContainerStarted","Data":"e91b783950f41c2db497efc5d066bbc02c599fc8f32613a56724a2bf9c2331fa"} Mar 16 16:18:02 crc kubenswrapper[4736]: I0316 16:18:02.979078 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:18:02 crc kubenswrapper[4736]: E0316 16:18:02.979729 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:04 crc kubenswrapper[4736]: I0316 16:18:04.412407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" event={"ID":"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364","Type":"ContainerStarted","Data":"ce83b4bec7fb35f84676bf5ba52f0d4c489f6c6ded8d15937e7f419b835c5a93"} Mar 16 16:18:04 crc kubenswrapper[4736]: I0316 16:18:04.440791 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" podStartSLOduration=3.280990294 podStartE2EDuration="4.43750001s" podCreationTimestamp="2026-03-16 16:18:00 +0000 UTC" firstStartedPulling="2026-03-16 16:18:02.218772413 +0000 UTC m=+3883.946162690" lastFinishedPulling="2026-03-16 16:18:03.375282109 +0000 UTC m=+3885.102672406" observedRunningTime="2026-03-16 16:18:04.432854433 +0000 UTC m=+3886.160244730" watchObservedRunningTime="2026-03-16 16:18:04.43750001 +0000 UTC m=+3886.164890297" Mar 16 16:18:06 crc kubenswrapper[4736]: I0316 16:18:06.429403 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" event={"ID":"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364","Type":"ContainerDied","Data":"ce83b4bec7fb35f84676bf5ba52f0d4c489f6c6ded8d15937e7f419b835c5a93"} Mar 16 16:18:06 crc kubenswrapper[4736]: I0316 16:18:06.430669 4736 generic.go:334] "Generic (PLEG): container finished" podID="ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" containerID="ce83b4bec7fb35f84676bf5ba52f0d4c489f6c6ded8d15937e7f419b835c5a93" exitCode=0 Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.068632 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.129084 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gjl\" (UniqueName: \"kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl\") pod \"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364\" (UID: \"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364\") " Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.157325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl" (OuterVolumeSpecName: "kube-api-access-q9gjl") pod "ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" (UID: "ad9d14ec-5a0a-4f26-bbc5-969d76f2c364"). InnerVolumeSpecName "kube-api-access-q9gjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.235936 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gjl\" (UniqueName: \"kubernetes.io/projected/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364-kube-api-access-q9gjl\") on node \"crc\" DevicePath \"\"" Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.448590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" event={"ID":"ad9d14ec-5a0a-4f26-bbc5-969d76f2c364","Type":"ContainerDied","Data":"e91b783950f41c2db497efc5d066bbc02c599fc8f32613a56724a2bf9c2331fa"} Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.448654 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561298-rcgzr" Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.449312 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e91b783950f41c2db497efc5d066bbc02c599fc8f32613a56724a2bf9c2331fa" Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.548350 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561292-w4x2x"] Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.560467 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561292-w4x2x"] Mar 16 16:18:08 crc kubenswrapper[4736]: I0316 16:18:08.993635 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4b6b4a-0d61-447b-b687-84dfe6d52466" path="/var/lib/kubelet/pods/3c4b6b4a-0d61-447b-b687-84dfe6d52466/volumes" Mar 16 16:18:14 crc kubenswrapper[4736]: I0316 16:18:14.978712 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:18:14 crc kubenswrapper[4736]: E0316 16:18:14.979433 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:25 crc kubenswrapper[4736]: I0316 16:18:25.978779 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:18:25 crc kubenswrapper[4736]: E0316 16:18:25.980733 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:37 crc kubenswrapper[4736]: I0316 16:18:37.978688 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:18:37 crc kubenswrapper[4736]: E0316 16:18:37.979849 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:52 crc kubenswrapper[4736]: I0316 16:18:52.977798 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:18:52 crc kubenswrapper[4736]: E0316 16:18:52.978592 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:18:54 crc kubenswrapper[4736]: I0316 16:18:54.237898 4736 scope.go:117] "RemoveContainer" containerID="e7383854e3c90f17aeefe32956e194a26960a61005c130f79635912bad667a98" Mar 16 16:19:06 crc kubenswrapper[4736]: I0316 16:19:06.980639 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:19:06 crc kubenswrapper[4736]: E0316 16:19:06.983836 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:19:20 crc kubenswrapper[4736]: I0316 16:19:20.984249 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:19:20 crc kubenswrapper[4736]: E0316 16:19:20.988388 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:19:33 crc kubenswrapper[4736]: I0316 16:19:33.985229 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:19:33 crc kubenswrapper[4736]: E0316 16:19:33.989747 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:19:44 crc kubenswrapper[4736]: I0316 16:19:44.988211 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.704981 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:19:45 crc kubenswrapper[4736]: E0316 16:19:45.706222 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" containerName="oc" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.706244 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" containerName="oc" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.707840 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" containerName="oc" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.714050 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.801540 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.801708 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkdk\" (UniqueName: \"kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.802149 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.904500 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.904567 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.904624 4736 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mkdk\" (UniqueName: \"kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.910327 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:45 crc kubenswrapper[4736]: I0316 16:19:45.910329 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:46 crc kubenswrapper[4736]: I0316 16:19:46.012871 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mkdk\" (UniqueName: \"kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk\") pod \"certified-operators-zqlgm\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:46 crc kubenswrapper[4736]: I0316 16:19:46.018789 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:19:46 crc kubenswrapper[4736]: I0316 16:19:46.067503 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:19:46 crc kubenswrapper[4736]: I0316 16:19:46.370469 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe"} Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.663100 4736 patch_prober.go:28] interesting pod/oauth-openshift-5d97c857-tmkjv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.663088 4736 patch_prober.go:28] interesting pod/oauth-openshift-5d97c857-tmkjv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.663118 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.663087 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.670399 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.670429 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" podUID="3aded1fa-a7f1-4247-9f8b-3638bbbc47c7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.670535 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:49 crc kubenswrapper[4736]: I0316 16:19:49.672372 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" podUID="3aded1fa-a7f1-4247-9f8b-3638bbbc47c7" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:50 crc kubenswrapper[4736]: I0316 16:19:50.341023 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:53 crc kubenswrapper[4736]: I0316 16:19:53.049342 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:19:53 crc kubenswrapper[4736]: I0316 16:19:53.588489 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerStarted","Data":"d0da1bdd229f1c89972ee84c6ebf5b82987f3ac7f8923b9cefc0a667b04f8898"} Mar 16 16:19:53 crc kubenswrapper[4736]: I0316 16:19:53.955935 4736 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:53 crc kubenswrapper[4736]: I0316 16:19:53.962536 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:54 crc kubenswrapper[4736]: I0316 16:19:54.429992 4736 patch_prober.go:28] interesting pod/console-b958d4498-zgmwr container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:54 crc kubenswrapper[4736]: I0316 16:19:54.430145 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b958d4498-zgmwr" podUID="d4563a36-0b6c-48df-acc0-c0bcd50eb002" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:54 crc kubenswrapper[4736]: I0316 16:19:54.608145 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerDied","Data":"1d5995eae133d88946ca7ca66dbe22d82c1583eece44d75e0535450d71273010"} Mar 16 16:19:54 crc kubenswrapper[4736]: I0316 16:19:54.611067 4736 generic.go:334] "Generic (PLEG): container finished" podID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerID="1d5995eae133d88946ca7ca66dbe22d82c1583eece44d75e0535450d71273010" exitCode=0 Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.048898 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.049038 4736 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.052715 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.052808 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.824407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerStarted","Data":"6ef7c77a661475ba99a47bed14eb9cd79e1d1f2adf269d2a5d133bf49fe3b3be"} Mar 16 16:19:56 crc kubenswrapper[4736]: I0316 16:19:56.991313 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" podUID="1ae22b3c-97a5-4592-b263-557131818155" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:57 crc kubenswrapper[4736]: I0316 16:19:57.207368 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" podUID="2d48b057-960e-445a-bc66-b6d3dbfb56f9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:57 crc kubenswrapper[4736]: I0316 16:19:57.385401 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podUID="99d86cbe-cf17-42a7-bc5b-d692609fff64" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:57 crc kubenswrapper[4736]: I0316 16:19:57.471435 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podUID="b1ae843c-f1b5-4ee2-8300-55f93941ba2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:57 crc kubenswrapper[4736]: I0316 16:19:57.512324 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Mar 16 16:19:58 crc kubenswrapper[4736]: I0316 16:19:58.303273 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" podUID="bdcce941-5cae-42fe-9dc5-a71e1e55790e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:58 crc kubenswrapper[4736]: I0316 16:19:58.842613 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="02f0ab2b-3871-4319-a39a-2c1d13a8c6e6" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.032311 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.032308 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.045226 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.045281 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.269380 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" podUID="b7e58b81-1f06-4844-adbe-ade114adc726" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.269744 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" podUID="b7e58b81-1f06-4844-adbe-ade114adc726" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666339 4736 patch_prober.go:28] interesting pod/controller-manager-697bfb5dcc-2mrmq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666404 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" podUID="7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666410 4736 patch_prober.go:28] interesting pod/controller-manager-697bfb5dcc-2mrmq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666336 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666465 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666475 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" podUID="7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666521 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:19:59 crc kubenswrapper[4736]: I0316 16:19:59.666536 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.180378 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" podUID="0a9b1e66-192c-4eab-a960-7fbd08759f54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.180397 4736 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" podUID="0a9b1e66-192c-4eab-a960-7fbd08759f54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.355728 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-ztkrd" podUID="aeb1e197-872b-4ade-b3e4-425a5e52433f" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:00 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:00 crc kubenswrapper[4736]: > Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.355728 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-ztkrd" podUID="aeb1e197-872b-4ade-b3e4-425a5e52433f" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:00 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:00 crc kubenswrapper[4736]: > Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.502303 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" podUID="21bc5f54-2767-431f-add2-433724ea4408" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.502303 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.502394 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.502375 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:00 crc kubenswrapper[4736]: I0316 16:20:00.502832 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" podUID="21bc5f54-2767-431f-add2-433724ea4408" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.483327 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-djs8w" podUID="5ab46a17-c761-4952-b743-9ede5877674a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.483382 4736 prober.go:107] "Probe 
failed" probeType="Liveness" pod="metallb-system/speaker-djs8w" podUID="5ab46a17-c761-4952-b743-9ede5877674a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.565295 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podUID="d2eb8b3d-8b48-4110-bab7-66fc20948ee5" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.80:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.565294 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podUID="d2eb8b3d-8b48-4110-bab7-66fc20948ee5" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.80:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.891440 4736 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-5jx6b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.893300 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" podUID="77900183-2391-4a47-8468-b36847297446" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.950318 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.950387 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.951057 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.951374 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.956141 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.956249 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.962770 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 16 16:20:01 crc kubenswrapper[4736]: I0316 16:20:01.964803 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" containerID="cri-o://59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9" gracePeriod=30 Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.319953 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-2j5dh container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.320306 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podUID="fcef29de-d881-4b9e-871e-6a2cc33484b6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.320024 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-2j5dh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.320370 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podUID="fcef29de-d881-4b9e-871e-6a2cc33484b6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.755358 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4qcc8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.755459 
4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" podUID="7c040e8d-b247-49a6-93bd-f928c704b135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.755368 4736 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4qcc8 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.755562 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4qcc8" podUID="7c040e8d-b247-49a6-93bd-f928c704b135" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.793996 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.794118 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.883855 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.883911 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.883949 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.883993 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.997272 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Mar 16 16:20:02 crc kubenswrapper[4736]: I0316 16:20:02.997597 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.412645 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.412711 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.412839 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.412911 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.436041 4736 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzwwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.436492 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" podUID="f8234512-ffa3-4229-b3d5-be9360b59dac" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.436092 4736 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xzwwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": context deadline exceeded" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.436730 4736 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xzwwg" podUID="f8234512-ffa3-4229-b3d5-be9360b59dac" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": context deadline exceeded" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.476297 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxs96 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.476747 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podUID="db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.477215 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxs96 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.477251 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podUID="db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.580274 4736 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-wtl7f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.580326 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" podUID="4aa7778f-ab77-4cbe-ac51-99c2f2206b15" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.580292 4736 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-wtl7f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.580891 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" podUID="4aa7778f-ab77-4cbe-ac51-99c2f2206b15" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.797024 4736 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.797033 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.925318 4736 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.926616 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.977521 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:03 crc kubenswrapper[4736]: I0316 16:20:03.978035 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:04 crc kubenswrapper[4736]: I0316 16:20:04.417651 4736 patch_prober.go:28] interesting pod/console-b958d4498-zgmwr container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:04 crc kubenswrapper[4736]: I0316 16:20:04.417727 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-b958d4498-zgmwr" podUID="d4563a36-0b6c-48df-acc0-c0bcd50eb002" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:04 crc kubenswrapper[4736]: I0316 16:20:04.518637 4736 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-xbp5l container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.22:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:04 crc kubenswrapper[4736]: I0316 16:20:04.518706 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" podUID="88c72b3e-a013-4f44-ae5f-93e44846f22a" containerName="nmstate-webhook" probeResult="failure" output="Get 
\"https://10.217.0.22:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:04 crc kubenswrapper[4736]: I0316 16:20:04.802725 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="321d2397-bb79-4799-8725-95081269785f" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 16 16:20:05 crc kubenswrapper[4736]: I0316 16:20:05.685192 4736 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:05 crc kubenswrapper[4736]: I0316 16:20:05.685261 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.444347 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-5kgqs" podUID="9fba4db8-97e3-4e96-b22b-0aca71b4217f" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.444534 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-ftntd" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.444586 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-48gwj" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.445581 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-j9zbt" podUID="64d704b6-37a3-4ea1-bbe6-af675569eb7a" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.446280 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-j9zbt" podUID="64d704b6-37a3-4ea1-bbe6-af675569eb7a" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.446592 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-48gwj" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" probeResult="failure" output=< Mar 
16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.449909 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-5kgqs" podUID="9fba4db8-97e3-4e96-b22b-0aca71b4217f" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.487305 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podUID="d2eb8b3d-8b48-4110-bab7-66fc20948ee5" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.80:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.891327 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" podUID="8163ef92-862a-4de1-a443-8ac84a5ba0c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.951439 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-ftntd" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:06 crc kubenswrapper[4736]: > Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.974311 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-tgqsm" podUID="8163ef92-862a-4de1-a443-8ac84a5ba0c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.974312 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" podUID="99a35a5a-103f-4e00-9b39-d4f86531f5f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:06 crc kubenswrapper[4736]: I0316 16:20:06.974923 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-w4ppt" podUID="99a35a5a-103f-4e00-9b39-d4f86531f5f7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.041543 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" podUID="1ae22b3c-97a5-4592-b263-557131818155" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.128403 4736 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-d6m9n" podUID="1ae22b3c-97a5-4592-b263-557131818155" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.128504 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" podUID="aac26090-af84-496a-afdf-efdb24694811" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.128644 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-fd7xj" podUID="aac26090-af84-496a-afdf-efdb24694811" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.138948 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.139026 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.333393 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" podUID="9d7909e1-3088-4a9e-b2ac-286927abd741" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.333486 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" podUID="2d48b057-960e-445a-bc66-b6d3dbfb56f9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.333642 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-c6tc2" podUID="2d48b057-960e-445a-bc66-b6d3dbfb56f9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.416402 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" podUID="569449b8-1135-4dd6-b6fe-ad66844b413e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/readyz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.416435 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-z9l9q" podUID="9d7909e1-3088-4a9e-b2ac-286927abd741" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.580302 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-55f864c847-sjpl5" podUID="569449b8-1135-4dd6-b6fe-ad66844b413e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.68:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.580381 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podUID="99d86cbe-cf17-42a7-bc5b-d692609fff64" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.744338 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.744334 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-767865f676-bqghw" podUID="99d86cbe-cf17-42a7-bc5b-d692609fff64" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.826766 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" podUID="f8308a1a-301e-40b9-8a0e-b7e267e74a10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.826888 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-9kzj2" podUID="f8308a1a-301e-40b9-8a0e-b7e267e74a10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.63:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.826899 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" podUID="e7971b38-1b13-4984-a055-2cc52b34bf6b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.992315 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" 
podUID="40be2c61-bd71-46b6-b837-abf09d8d5aeb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.992448 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podUID="b1ae843c-f1b5-4ee2-8300-55f93941ba2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.992489 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-7hrfc" podUID="b1ae843c-f1b5-4ee2-8300-55f93941ba2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:07 crc kubenswrapper[4736]: I0316 16:20:07.992534 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-7gd6n" podUID="d77bc7ac-fb08-4603-8453-677c6be6916d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.070907 4736 generic.go:334] "Generic (PLEG): container finished" podID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerID="59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9" exitCode=0 Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.071453 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerDied","Data":"59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9"} Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.075319 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podUID="534a3ae8-6587-4e8a-b454-b084edbfeb21" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.075333 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-47kkg" podUID="e7971b38-1b13-4984-a055-2cc52b34bf6b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.158402 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" podUID="6cbcdd30-245d-4732-8986-77f861f1f568" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.158507 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" podUID="285f243f-b886-440f-8a92-b1ddf60bf6e6" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.158636 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-bgvjq" podUID="285f243f-b886-440f-8a92-b1ddf60bf6e6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.158682 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-6z9rj" podUID="fff6882e-3a77-462f-b12e-25192ea56328" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.234440 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" podUID="93d0e3bc-0e33-4254-b52e-31f28fdff357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.235495 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-884679f54-vfqg8" podUID="40be2c61-bd71-46b6-b837-abf09d8d5aeb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.235581 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-c674c5965-7lk9g" podUID="534a3ae8-6587-4e8a-b454-b084edbfeb21" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.235647 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5784578c99-6fjhm" podUID="6cbcdd30-245d-4732-8986-77f861f1f568" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.235965 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-5ngf9" podUID="93d0e3bc-0e33-4254-b52e-31f28fdff357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.345354 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" podUID="bdcce941-5cae-42fe-9dc5-a71e1e55790e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.345354 4736 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-pj69z" podUID="bdcce941-5cae-42fe-9dc5-a71e1e55790e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.715296 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" podUID="109b033e-a4ea-474a-9e79-e895cc75666e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.793137 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-tbvd2" podUID="eea9e7aa-6f24-4b45-b7b4-347a38dccb64" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.823401 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" podUID="634ac783-1fe6-4191-b432-f22ad5d84357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.834590 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="02f0ab2b-3871-4319-a39a-2c1d13a8c6e6" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:08 crc kubenswrapper[4736]: I0316 16:20:08.835834 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="02f0ab2b-3871-4319-a39a-2c1d13a8c6e6" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.221:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.082167 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerStarted","Data":"a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0"} Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.082866 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.268268 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" podUID="b7e58b81-1f06-4844-adbe-ade114adc726" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.268295 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-9c55cfcd7-trkfb" podUID="b7e58b81-1f06-4844-adbe-ade114adc726" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.443369 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-pc5vv" podUID="62d536a1-c184-4077-a6f8-4285c3ebe5db" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.443360 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" podUID="34b67803-050a-457b-80ff-64455949a26d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.490853 4736 patch_prober.go:28] interesting pod/oauth-openshift-5d97c857-tmkjv container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.490845 4736 patch_prober.go:28] interesting pod/oauth-openshift-5d97c857-tmkjv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.491896 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" podUID="3aded1fa-a7f1-4247-9f8b-3638bbbc47c7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.492463 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5d97c857-tmkjv" podUID="3aded1fa-a7f1-4247-9f8b-3638bbbc47c7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.498796 4736 patch_prober.go:28] interesting pod/controller-manager-697bfb5dcc-2mrmq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.498846 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" podUID="7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.498894 4736 patch_prober.go:28] interesting 
pod/controller-manager-697bfb5dcc-2mrmq container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.498906 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-697bfb5dcc-2mrmq" podUID="7a81bbc2-e0cc-4d5e-9be9-adfa5d6c6cd9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.510257 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.510279 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.510304 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.510344 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.510388 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.511900 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"81297ce68d472c73984a413a9341745498f245b0ecfc34a81e953fc7ebd531ab"} pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.515010 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" 
containerID="cri-o://81297ce68d472c73984a413a9341745498f245b0ecfc34a81e953fc7ebd531ab" gracePeriod=30 Mar 16 16:20:09 crc kubenswrapper[4736]: I0316 16:20:09.799507 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="321d2397-bb79-4799-8725-95081269785f" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.140516 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-5467877-vhgh7" podUID="0a9b1e66-192c-4eab-a960-7fbd08759f54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.377423 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.377554 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-hsz7s" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.379186 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"c7a10ab660a1f06be57f3a3de187f2c5127f5f2f0fb46e446813c5266bc49990"} pod="metallb-system/frr-k8s-hsz7s" containerMessage="Container frr failed liveness probe, will be restarted" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.379357 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="frr" containerID="cri-o://c7a10ab660a1f06be57f3a3de187f2c5127f5f2f0fb46e446813c5266bc49990" gracePeriod=2 Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.500408 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" podUID="21bc5f54-2767-431f-add2-433724ea4408" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.500516 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.500747 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hsz7s" podUID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:10 crc kubenswrapper[4736]: I0316 16:20:10.501400 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-vxgc7" podUID="21bc5f54-2767-431f-add2-433724ea4408" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7572/metrics\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.114443 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerDied","Data":"c7a10ab660a1f06be57f3a3de187f2c5127f5f2f0fb46e446813c5266bc49990"} Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.116309 4736 generic.go:334] "Generic (PLEG): container finished" podID="b8fd5e0d-983e-4780-9a84-dc84a9766804" containerID="c7a10ab660a1f06be57f3a3de187f2c5127f5f2f0fb46e446813c5266bc49990" exitCode=143 Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.483273 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-djs8w" podUID="5ab46a17-c761-4952-b743-9ede5877674a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.483617 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-djs8w" podUID="5ab46a17-c761-4952-b743-9ede5877674a" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.524323 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podUID="d2eb8b3d-8b48-4110-bab7-66fc20948ee5" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.80:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.565288 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" podUID="d2eb8b3d-8b48-4110-bab7-66fc20948ee5" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.80:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.565392 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.889627 4736 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-5jx6b container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:11 crc kubenswrapper[4736]: I0316 16:20:11.889747 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-5jx6b" podUID="77900183-2391-4a47-8468-b36847297446" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.133947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hsz7s" event={"ID":"b8fd5e0d-983e-4780-9a84-dc84a9766804","Type":"ContainerStarted","Data":"3f7ffec80f3ec03cc902714518472bfe34ec08ceb92b8c096d734212420bd7c8"} 
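
Editor's note on the probe-failure entries above and below: the recurring suffix "(Client.Timeout exceeded while awaiting headers)" and the "connect: connection refused" variants are standard Go net/http client errors, consistent with kubelet's HTTP prober issuing a plain GET with the probe's timeoutSeconds as the client timeout; the exitCode=143 logged for the killed frr container is 128+15, i.e. termination by SIGTERM following the liveness-probe-triggered kill with gracePeriod=2. The sketch below is an illustration assumed from that behavior, not kubelet source; the endpoint URL is copied from one of the "Probe failed" entries above.

// probesketch.go - minimal sketch reproducing the error wording seen in the
// "Probe failed" entries: a GET with a 1s client timeout against a /healthz
// endpoint. An unresponsive endpoint yields
//   "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" or
//   "net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)";
// a closed port yields "dial tcp ...: connect: connection refused".
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Mirrors a probe with timeoutSeconds: 1.
	client := &http.Client{Timeout: 1 * time.Second}

	// Endpoint taken from a log entry above (cinder-operator manager liveness probe).
	resp, err := client.Get("http://10.217.0.55:8081/healthz")
	if err != nil {
		// This error string matches the output= field of the failed-probe log entries.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	// Kubelet's HTTP probe treats 2xx/3xx responses as success.
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe succeeded:", resp.Status)
	} else {
		fmt.Println("probe failed with status:", resp.Status)
	}
}
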
Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.244140 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qrt7l" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.402016 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-2j5dh container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.402027 4736 patch_prober.go:28] interesting pod/console-operator-58897d9998-2j5dh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.402077 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podUID="fcef29de-d881-4b9e-871e-6a2cc33484b6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.402096 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2j5dh" podUID="fcef29de-d881-4b9e-871e-6a2cc33484b6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.794084 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.794089 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="51e06fc2-19ee-4e32-8118-d4596cb6b124" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.797330 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-ztkrd" podUID="aeb1e197-872b-4ade-b3e4-425a5e52433f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.797729 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-ztkrd" podUID="aeb1e197-872b-4ade-b3e4-425a5e52433f" containerName="registry-server" probeResult="failure" output="command timed out" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.881278 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.881293 4736 patch_prober.go:28] interesting pod/router-default-5444994796-ns4mr container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.881413 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.881342 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-ns4mr" podUID="584c55a3-9d43-42ab-9fcd-b3a938b52dc1" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.947432 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.947442 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.947878 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:12 crc kubenswrapper[4736]: I0316 16:20:12.947781 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.175231 4736 generic.go:334] "Generic (PLEG): container finished" podID="8689f548-c815-44d2-bd27-6d3358162480" containerID="81297ce68d472c73984a413a9341745498f245b0ecfc34a81e953fc7ebd531ab" exitCode=0 Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.176289 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" event={"ID":"8689f548-c815-44d2-bd27-6d3358162480","Type":"ContainerDied","Data":"81297ce68d472c73984a413a9341745498f245b0ecfc34a81e953fc7ebd531ab"} Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.412918 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.412990 4736 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.413062 4736 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-qnszh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.413075 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-qnszh" podUID="6d00785d-6730-42d9-8004-f1bbc451d581" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.486387 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxs96 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.486442 4736 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hxs96 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.486517 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podUID="db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.486451 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hxs96" podUID="db54a2a9-0da1-4c84-9d00-ec3cbb9f2e74" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.580243 4736 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-wtl7f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.580306 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" podUID="4aa7778f-ab77-4cbe-ac51-99c2f2206b15" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.580332 4736 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-wtl7f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.580414 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wtl7f" podUID="4aa7778f-ab77-4cbe-ac51-99c2f2206b15" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.793330 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.793349 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="9243d80f-05dc-4dff-a328-780f64a121af" containerName="galera" probeResult="failure" output="command timed out" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.922272 4736 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.922389 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:13 crc kubenswrapper[4736]: I0316 16:20:13.922540 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.202317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" event={"ID":"8689f548-c815-44d2-bd27-6d3358162480","Type":"ContainerStarted","Data":"1b90bcb18a5f07eca8919e75ecfdfdcaf92061286471ee47419374a9c3912898"} Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.203494 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.204120 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.204170 
4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.294649 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hsz7s" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.371132 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hsz7s" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.488199 4736 patch_prober.go:28] interesting pod/nmstate-webhook-5f558f5558-xbp5l container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.22:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.488567 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f558f5558-xbp5l" podUID="88c72b3e-a013-4f44-ae5f-93e44846f22a" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.22:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.633353 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.797917 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="321d2397-bb79-4799-8725-95081269785f" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.797989 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.801994 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"7bea4df300efab67b4cee39b2e7427609216a49525f1efed6d3324f694c73f79"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 16 16:20:14 crc kubenswrapper[4736]: I0316 16:20:14.802099 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="321d2397-bb79-4799-8725-95081269785f" containerName="ceilometer-central-agent" containerID="cri-o://7bea4df300efab67b4cee39b2e7427609216a49525f1efed6d3324f694c73f79" gracePeriod=30 Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.157156 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561300-tt5hr"] Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.187473 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.211901 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.212864 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.222003 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxqx\" (UniqueName: \"kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx\") pod \"auto-csr-approver-29561300-tt5hr\" (UID: \"4012606f-be73-4dc2-9c22-1dc076985291\") " pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.333558 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxqx\" (UniqueName: \"kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx\") pod \"auto-csr-approver-29561300-tt5hr\" (UID: \"4012606f-be73-4dc2-9c22-1dc076985291\") " pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.364233 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.364264 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.364236 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.563921 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxqx\" (UniqueName: \"kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx\") pod \"auto-csr-approver-29561300-tt5hr\" (UID: \"4012606f-be73-4dc2-9c22-1dc076985291\") " pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.857306 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.946664 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.946698 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.946706 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:15 crc kubenswrapper[4736]: I0316 16:20:15.946730 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:17 crc kubenswrapper[4736]: I0316 16:20:17.005279 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561300-tt5hr"] Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.298293 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerDied","Data":"7bea4df300efab67b4cee39b2e7427609216a49525f1efed6d3324f694c73f79"} Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.302026 4736 generic.go:334] "Generic (PLEG): container finished" podID="321d2397-bb79-4799-8725-95081269785f" containerID="7bea4df300efab67b4cee39b2e7427609216a49525f1efed6d3324f694c73f79" exitCode=0 Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.512987 4736 patch_prober.go:28] interesting pod/route-controller-manager-6c78568d4c-5mng5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.513919 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" podUID="8689f548-c815-44d2-bd27-6d3358162480" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.947267 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:18 
crc kubenswrapper[4736]: I0316 16:20:18.947333 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.947267 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.947383 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.947443 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.948026 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.948054 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.949062 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 16 16:20:18 crc kubenswrapper[4736]: I0316 16:20:18.949138 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" containerID="cri-o://a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0" gracePeriod=30 Mar 16 16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.333603 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-j58gg_bfbf1c26-496a-4fc3-a248-9c2db09bf334/openshift-config-operator/1.log" Mar 16 16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.338731 4736 generic.go:334] "Generic (PLEG): container finished" podID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerID="a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0" exitCode=2 Mar 16 
16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.338797 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerDied","Data":"a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0"} Mar 16 16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.338824 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" event={"ID":"bfbf1c26-496a-4fc3-a248-9c2db09bf334","Type":"ContainerStarted","Data":"e62bf31c3056bb156c48953250c726b661e0a11f35921e469f20b1d7da15a30f"} Mar 16 16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.341853 4736 scope.go:117] "RemoveContainer" containerID="59416ede21a1396d91e4fa5bf37227c3fabee285caf735e9b9fe1c63ce3882b9" Mar 16 16:20:19 crc kubenswrapper[4736]: I0316 16:20:19.343297 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"321d2397-bb79-4799-8725-95081269785f","Type":"ContainerStarted","Data":"06d8d083e50f91af2a992182ca8101c5b35a61632b8268097ab15653aea2697c"} Mar 16 16:20:20 crc kubenswrapper[4736]: I0316 16:20:20.295923 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561300-tt5hr"] Mar 16 16:20:20 crc kubenswrapper[4736]: I0316 16:20:20.361847 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-j58gg_bfbf1c26-496a-4fc3-a248-9c2db09bf334/openshift-config-operator/1.log" Mar 16 16:20:20 crc kubenswrapper[4736]: W0316 16:20:20.364292 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4012606f_be73_4dc2_9c22_1dc076985291.slice/crio-e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb WatchSource:0}: Error finding container e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb: Status 404 returned error can't find the container with id e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb Mar 16 16:20:20 crc kubenswrapper[4736]: I0316 16:20:20.365202 4736 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j58gg container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Mar 16 16:20:20 crc kubenswrapper[4736]: I0316 16:20:20.365240 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" podUID="bfbf1c26-496a-4fc3-a248-9c2db09bf334" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Mar 16 16:20:21 crc kubenswrapper[4736]: I0316 16:20:21.374185 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" event={"ID":"4012606f-be73-4dc2-9c22-1dc076985291","Type":"ContainerStarted","Data":"e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb"} Mar 16 16:20:21 crc kubenswrapper[4736]: I0316 16:20:21.376546 4736 generic.go:334] "Generic (PLEG): container finished" podID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerID="6ef7c77a661475ba99a47bed14eb9cd79e1d1f2adf269d2a5d133bf49fe3b3be" exitCode=0 Mar 16 16:20:21 crc kubenswrapper[4736]: I0316 
16:20:21.376623 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerDied","Data":"6ef7c77a661475ba99a47bed14eb9cd79e1d1f2adf269d2a5d133bf49fe3b3be"} Mar 16 16:20:21 crc kubenswrapper[4736]: I0316 16:20:21.949348 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:23 crc kubenswrapper[4736]: I0316 16:20:23.397135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerStarted","Data":"3ca8b40222c623edc289fdd8f5602767d51d88f220315fdb8d490e4b4db66269"} Mar 16 16:20:23 crc kubenswrapper[4736]: I0316 16:20:23.401187 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" event={"ID":"4012606f-be73-4dc2-9c22-1dc076985291","Type":"ContainerStarted","Data":"a130e7a117153b4016e6c6d75b16db2983215e5a3159b822f86f2040aceae8ee"} Mar 16 16:20:23 crc kubenswrapper[4736]: I0316 16:20:23.423990 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zqlgm" podStartSLOduration=10.940030111 podStartE2EDuration="38.423970083s" podCreationTimestamp="2026-03-16 16:19:45 +0000 UTC" firstStartedPulling="2026-03-16 16:19:54.653392794 +0000 UTC m=+3996.380783081" lastFinishedPulling="2026-03-16 16:20:22.137332766 +0000 UTC m=+4023.864723053" observedRunningTime="2026-03-16 16:20:23.420830597 +0000 UTC m=+4025.148220894" watchObservedRunningTime="2026-03-16 16:20:23.423970083 +0000 UTC m=+4025.151360370" Mar 16 16:20:23 crc kubenswrapper[4736]: I0316 16:20:23.453240 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" podStartSLOduration=9.354256846 podStartE2EDuration="10.453214691s" podCreationTimestamp="2026-03-16 16:20:13 +0000 UTC" firstStartedPulling="2026-03-16 16:20:20.369625638 +0000 UTC m=+4022.097015925" lastFinishedPulling="2026-03-16 16:20:21.468583483 +0000 UTC m=+4023.195973770" observedRunningTime="2026-03-16 16:20:23.44074022 +0000 UTC m=+4025.168130507" watchObservedRunningTime="2026-03-16 16:20:23.453214691 +0000 UTC m=+4025.180604988" Mar 16 16:20:24 crc kubenswrapper[4736]: I0316 16:20:24.420186 4736 generic.go:334] "Generic (PLEG): container finished" podID="4012606f-be73-4dc2-9c22-1dc076985291" containerID="a130e7a117153b4016e6c6d75b16db2983215e5a3159b822f86f2040aceae8ee" exitCode=0 Mar 16 16:20:24 crc kubenswrapper[4736]: I0316 16:20:24.421717 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" event={"ID":"4012606f-be73-4dc2-9c22-1dc076985291","Type":"ContainerDied","Data":"a130e7a117153b4016e6c6d75b16db2983215e5a3159b822f86f2040aceae8ee"} Mar 16 16:20:25 crc kubenswrapper[4736]: I0316 16:20:25.002620 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j58gg" Mar 16 16:20:25 crc kubenswrapper[4736]: E0316 16:20:25.646972 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:20:26 crc kubenswrapper[4736]: I0316 16:20:26.071027 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:26 crc kubenswrapper[4736]: I0316 16:20:26.071767 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.216142 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zqlgm" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:27 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:27 crc kubenswrapper[4736]: > Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.497029 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" event={"ID":"4012606f-be73-4dc2-9c22-1dc076985291","Type":"ContainerDied","Data":"e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb"} Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.499696 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e50eb317c4593ac3a47ccd83b4452d55ddd58997b23a120f07a357052cd71bcb" Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.556689 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.724932 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxqx\" (UniqueName: \"kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx\") pod \"4012606f-be73-4dc2-9c22-1dc076985291\" (UID: \"4012606f-be73-4dc2-9c22-1dc076985291\") " Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.775435 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx" (OuterVolumeSpecName: "kube-api-access-nsxqx") pod "4012606f-be73-4dc2-9c22-1dc076985291" (UID: "4012606f-be73-4dc2-9c22-1dc076985291"). InnerVolumeSpecName "kube-api-access-nsxqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:20:27 crc kubenswrapper[4736]: I0316 16:20:27.830501 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxqx\" (UniqueName: \"kubernetes.io/projected/4012606f-be73-4dc2-9c22-1dc076985291-kube-api-access-nsxqx\") on node \"crc\" DevicePath \"\"" Mar 16 16:20:28 crc kubenswrapper[4736]: I0316 16:20:28.503900 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561300-tt5hr" Mar 16 16:20:28 crc kubenswrapper[4736]: I0316 16:20:28.850492 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c78568d4c-5mng5" Mar 16 16:20:29 crc kubenswrapper[4736]: I0316 16:20:29.273995 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561294-j8kff"] Mar 16 16:20:29 crc kubenswrapper[4736]: I0316 16:20:29.297180 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561294-j8kff"] Mar 16 16:20:31 crc kubenswrapper[4736]: I0316 16:20:31.002467 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d80b32-f733-4f90-b312-3be0b8443fe7" path="/var/lib/kubelet/pods/92d80b32-f733-4f90-b312-3be0b8443fe7/volumes" Mar 16 16:20:36 crc kubenswrapper[4736]: E0316 16:20:36.153448 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:20:37 crc kubenswrapper[4736]: I0316 16:20:37.129910 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zqlgm" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:37 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:37 crc kubenswrapper[4736]: > Mar 16 16:20:46 crc kubenswrapper[4736]: E0316 16:20:46.611835 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:20:47 crc kubenswrapper[4736]: I0316 16:20:47.137755 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zqlgm" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" probeResult="failure" output=< Mar 16 16:20:47 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:20:47 crc kubenswrapper[4736]: > Mar 16 16:20:54 crc kubenswrapper[4736]: I0316 16:20:54.600505 4736 scope.go:117] "RemoveContainer" containerID="6fb3f577668c57014f6894cad5aa889bc53590d2fefdd1a4e0bc0928c9c9c6bf" Mar 16 16:20:56 crc kubenswrapper[4736]: I0316 16:20:56.150297 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:56 crc kubenswrapper[4736]: I0316 16:20:56.206499 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:56 crc kubenswrapper[4736]: I0316 16:20:56.704165 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:20:57 crc kubenswrapper[4736]: E0316 16:20:57.051040 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:20:57 crc kubenswrapper[4736]: I0316 16:20:57.799942 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zqlgm" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" containerID="cri-o://3ca8b40222c623edc289fdd8f5602767d51d88f220315fdb8d490e4b4db66269" gracePeriod=2 Mar 16 16:20:58 crc kubenswrapper[4736]: I0316 16:20:58.812393 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerDied","Data":"3ca8b40222c623edc289fdd8f5602767d51d88f220315fdb8d490e4b4db66269"} Mar 16 16:20:58 crc kubenswrapper[4736]: I0316 16:20:58.813800 4736 generic.go:334] "Generic (PLEG): container finished" podID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerID="3ca8b40222c623edc289fdd8f5602767d51d88f220315fdb8d490e4b4db66269" exitCode=0 Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.112287 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.270509 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content\") pod \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.271131 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities\") pod \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.271183 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mkdk\" (UniqueName: \"kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk\") pod \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\" (UID: \"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c\") " Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.275170 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities" (OuterVolumeSpecName: "utilities") pod "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" (UID: "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.302854 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk" (OuterVolumeSpecName: "kube-api-access-8mkdk") pod "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" (UID: "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c"). InnerVolumeSpecName "kube-api-access-8mkdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.319050 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" (UID: "c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.373786 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.374061 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.374079 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mkdk\" (UniqueName: \"kubernetes.io/projected/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c-kube-api-access-8mkdk\") on node \"crc\" DevicePath \"\"" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.830744 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zqlgm" event={"ID":"c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c","Type":"ContainerDied","Data":"d0da1bdd229f1c89972ee84c6ebf5b82987f3ac7f8923b9cefc0a667b04f8898"} Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.831532 4736 scope.go:117] "RemoveContainer" containerID="3ca8b40222c623edc289fdd8f5602767d51d88f220315fdb8d490e4b4db66269" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.832297 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zqlgm" Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.912722 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.936632 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zqlgm"] Mar 16 16:20:59 crc kubenswrapper[4736]: I0316 16:20:59.974327 4736 scope.go:117] "RemoveContainer" containerID="6ef7c77a661475ba99a47bed14eb9cd79e1d1f2adf269d2a5d133bf49fe3b3be" Mar 16 16:21:00 crc kubenswrapper[4736]: I0316 16:21:00.021341 4736 scope.go:117] "RemoveContainer" containerID="1d5995eae133d88946ca7ca66dbe22d82c1583eece44d75e0535450d71273010" Mar 16 16:21:00 crc kubenswrapper[4736]: I0316 16:21:00.991755 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" path="/var/lib/kubelet/pods/c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c/volumes" Mar 16 16:21:07 crc kubenswrapper[4736]: E0316 16:21:07.336989 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:21:17 crc kubenswrapper[4736]: E0316 16:21:17.648477 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfbf1c26_496a_4fc3_a248_9c2db09bf334.slice/crio-conmon-a4ef714de22748b9dee6e09ee368547ab7d01e18e204fd44cbf5adfe188a68f0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.487312 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:24 crc kubenswrapper[4736]: E0316 16:21:24.496792 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="extract-utilities" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.496830 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="extract-utilities" Mar 16 16:21:24 crc kubenswrapper[4736]: E0316 16:21:24.496891 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="extract-content" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.496898 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="extract-content" Mar 16 16:21:24 crc kubenswrapper[4736]: E0316 16:21:24.496910 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.496918 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" Mar 16 16:21:24 crc kubenswrapper[4736]: E0316 16:21:24.496932 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4012606f-be73-4dc2-9c22-1dc076985291" containerName="oc" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.496938 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4012606f-be73-4dc2-9c22-1dc076985291" containerName="oc" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.499392 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21e77d0-cb1e-4d50-a80c-ef64c3e60e5c" containerName="registry-server" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.499830 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4012606f-be73-4dc2-9c22-1dc076985291" containerName="oc" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.508290 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.562488 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.562647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.562667 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v9v\" (UniqueName: \"kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.658406 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.664631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.664677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4v9v\" (UniqueName: \"kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.664786 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.672225 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content\") pod \"redhat-marketplace-zkmpl\" (UID: 
\"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.673437 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.709972 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4v9v\" (UniqueName: \"kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v\") pod \"redhat-marketplace-zkmpl\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:24 crc kubenswrapper[4736]: I0316 16:21:24.831021 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:26 crc kubenswrapper[4736]: I0316 16:21:26.490770 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:27 crc kubenswrapper[4736]: I0316 16:21:27.084676 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerDied","Data":"9928bee2339228557fcc79fce7c6f75f6cd858607a44216b6c25de12b2e5fb3f"} Mar 16 16:21:27 crc kubenswrapper[4736]: I0316 16:21:27.085738 4736 generic.go:334] "Generic (PLEG): container finished" podID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerID="9928bee2339228557fcc79fce7c6f75f6cd858607a44216b6c25de12b2e5fb3f" exitCode=0 Mar 16 16:21:27 crc kubenswrapper[4736]: I0316 16:21:27.086132 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerStarted","Data":"341652f6322faacb54b8f40fb66930c8c9cc7ecc06b13ccd2b9bfbd227cc86c6"} Mar 16 16:21:27 crc kubenswrapper[4736]: I0316 16:21:27.091529 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:21:28 crc kubenswrapper[4736]: I0316 16:21:28.095945 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerStarted","Data":"1b297ceeaa7fbcafc6fa2c9111146273884314f0ca1f3f9aed2265b0430ae17d"} Mar 16 16:21:30 crc kubenswrapper[4736]: I0316 16:21:30.116241 4736 generic.go:334] "Generic (PLEG): container finished" podID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerID="1b297ceeaa7fbcafc6fa2c9111146273884314f0ca1f3f9aed2265b0430ae17d" exitCode=0 Mar 16 16:21:30 crc kubenswrapper[4736]: I0316 16:21:30.116324 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerDied","Data":"1b297ceeaa7fbcafc6fa2c9111146273884314f0ca1f3f9aed2265b0430ae17d"} Mar 16 16:21:31 crc kubenswrapper[4736]: I0316 16:21:31.132968 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerStarted","Data":"6e37f19ae1051e49a3da9df84b9286d4f3ce41b0de9ac9845b33b42535f307e1"} Mar 16 16:21:33 crc 
kubenswrapper[4736]: I0316 16:21:33.186113 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zkmpl" podStartSLOduration=5.486485413 podStartE2EDuration="9.184982189s" podCreationTimestamp="2026-03-16 16:21:24 +0000 UTC" firstStartedPulling="2026-03-16 16:21:27.087580927 +0000 UTC m=+4088.814971214" lastFinishedPulling="2026-03-16 16:21:30.786077673 +0000 UTC m=+4092.513467990" observedRunningTime="2026-03-16 16:21:31.168965433 +0000 UTC m=+4092.896355720" watchObservedRunningTime="2026-03-16 16:21:33.184982189 +0000 UTC m=+4094.912372476" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.195319 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.214042 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.294532 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.330727 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjttl\" (UniqueName: \"kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.330915 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.330954 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.432747 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjttl\" (UniqueName: \"kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.432864 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.432887 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " 
pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.439555 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.441050 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.504002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjttl\" (UniqueName: \"kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl\") pod \"community-operators-nmdjd\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:33 crc kubenswrapper[4736]: I0316 16:21:33.580325 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:34 crc kubenswrapper[4736]: I0316 16:21:34.833297 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:34 crc kubenswrapper[4736]: I0316 16:21:34.833868 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:35 crc kubenswrapper[4736]: I0316 16:21:35.037030 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:21:35 crc kubenswrapper[4736]: I0316 16:21:35.181175 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerStarted","Data":"c0f9bc782aa30a2fc401f35dc74ed38969a35d14c2e44d3810de0101a2777b2b"} Mar 16 16:21:35 crc kubenswrapper[4736]: I0316 16:21:35.881366 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zkmpl" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" probeResult="failure" output=< Mar 16 16:21:35 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:21:35 crc kubenswrapper[4736]: > Mar 16 16:21:36 crc kubenswrapper[4736]: I0316 16:21:36.191018 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerDied","Data":"43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f"} Mar 16 16:21:36 crc kubenswrapper[4736]: I0316 16:21:36.193138 4736 generic.go:334] "Generic (PLEG): container finished" podID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerID="43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f" exitCode=0 Mar 16 16:21:38 crc kubenswrapper[4736]: I0316 16:21:38.218350 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" 
event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerStarted","Data":"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080"} Mar 16 16:21:40 crc kubenswrapper[4736]: I0316 16:21:40.235962 4736 generic.go:334] "Generic (PLEG): container finished" podID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerID="d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080" exitCode=0 Mar 16 16:21:40 crc kubenswrapper[4736]: I0316 16:21:40.236056 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerDied","Data":"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080"} Mar 16 16:21:41 crc kubenswrapper[4736]: I0316 16:21:41.248286 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerStarted","Data":"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e"} Mar 16 16:21:41 crc kubenswrapper[4736]: I0316 16:21:41.285372 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmdjd" podStartSLOduration=3.815412185 podStartE2EDuration="8.277477263s" podCreationTimestamp="2026-03-16 16:21:33 +0000 UTC" firstStartedPulling="2026-03-16 16:21:36.194793017 +0000 UTC m=+4097.922183304" lastFinishedPulling="2026-03-16 16:21:40.656858095 +0000 UTC m=+4102.384248382" observedRunningTime="2026-03-16 16:21:41.277123063 +0000 UTC m=+4103.004513350" watchObservedRunningTime="2026-03-16 16:21:41.277477263 +0000 UTC m=+4103.004867550" Mar 16 16:21:43 crc kubenswrapper[4736]: I0316 16:21:43.581929 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:43 crc kubenswrapper[4736]: I0316 16:21:43.582350 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:21:44 crc kubenswrapper[4736]: I0316 16:21:44.632724 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nmdjd" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" probeResult="failure" output=< Mar 16 16:21:44 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:21:44 crc kubenswrapper[4736]: > Mar 16 16:21:45 crc kubenswrapper[4736]: I0316 16:21:45.881714 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zkmpl" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" probeResult="failure" output=< Mar 16 16:21:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:21:45 crc kubenswrapper[4736]: > Mar 16 16:21:54 crc kubenswrapper[4736]: I0316 16:21:54.754138 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nmdjd" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" probeResult="failure" output=< Mar 16 16:21:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:21:54 crc kubenswrapper[4736]: > Mar 16 16:21:54 crc kubenswrapper[4736]: I0316 16:21:54.893121 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:54 crc kubenswrapper[4736]: 
I0316 16:21:54.954715 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:55 crc kubenswrapper[4736]: I0316 16:21:55.065223 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:56 crc kubenswrapper[4736]: I0316 16:21:56.403135 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zkmpl" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" containerID="cri-o://6e37f19ae1051e49a3da9df84b9286d4f3ce41b0de9ac9845b33b42535f307e1" gracePeriod=2 Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.415520 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerDied","Data":"6e37f19ae1051e49a3da9df84b9286d4f3ce41b0de9ac9845b33b42535f307e1"} Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.416063 4736 generic.go:334] "Generic (PLEG): container finished" podID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerID="6e37f19ae1051e49a3da9df84b9286d4f3ce41b0de9ac9845b33b42535f307e1" exitCode=0 Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.702774 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.740175 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4v9v\" (UniqueName: \"kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v\") pod \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.740233 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content\") pod \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.740426 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities\") pod \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\" (UID: \"3d18fce9-5855-4ccf-bcc4-debbf5b9876b\") " Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.754045 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities" (OuterVolumeSpecName: "utilities") pod "3d18fce9-5855-4ccf-bcc4-debbf5b9876b" (UID: "3d18fce9-5855-4ccf-bcc4-debbf5b9876b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.799168 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v" (OuterVolumeSpecName: "kube-api-access-t4v9v") pod "3d18fce9-5855-4ccf-bcc4-debbf5b9876b" (UID: "3d18fce9-5855-4ccf-bcc4-debbf5b9876b"). InnerVolumeSpecName "kube-api-access-t4v9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.830240 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d18fce9-5855-4ccf-bcc4-debbf5b9876b" (UID: "3d18fce9-5855-4ccf-bcc4-debbf5b9876b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.846763 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.846822 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4v9v\" (UniqueName: \"kubernetes.io/projected/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-kube-api-access-t4v9v\") on node \"crc\" DevicePath \"\"" Mar 16 16:21:57 crc kubenswrapper[4736]: I0316 16:21:57.846837 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d18fce9-5855-4ccf-bcc4-debbf5b9876b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.429718 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zkmpl" event={"ID":"3d18fce9-5855-4ccf-bcc4-debbf5b9876b","Type":"ContainerDied","Data":"341652f6322faacb54b8f40fb66930c8c9cc7ecc06b13ccd2b9bfbd227cc86c6"} Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.429775 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zkmpl" Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.434080 4736 scope.go:117] "RemoveContainer" containerID="6e37f19ae1051e49a3da9df84b9286d4f3ce41b0de9ac9845b33b42535f307e1" Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.482738 4736 scope.go:117] "RemoveContainer" containerID="1b297ceeaa7fbcafc6fa2c9111146273884314f0ca1f3f9aed2265b0430ae17d" Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.484415 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.500588 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zkmpl"] Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.523765 4736 scope.go:117] "RemoveContainer" containerID="9928bee2339228557fcc79fce7c6f75f6cd858607a44216b6c25de12b2e5fb3f" Mar 16 16:21:58 crc kubenswrapper[4736]: I0316 16:21:58.997412 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" path="/var/lib/kubelet/pods/3d18fce9-5855-4ccf-bcc4-debbf5b9876b/volumes" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.509667 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561302-27jnj"] Mar 16 16:22:00 crc kubenswrapper[4736]: E0316 16:22:00.515252 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="extract-content" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.515296 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="extract-content" Mar 16 16:22:00 crc 
kubenswrapper[4736]: E0316 16:22:00.515332 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.515341 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" Mar 16 16:22:00 crc kubenswrapper[4736]: E0316 16:22:00.515363 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="extract-utilities" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.515374 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="extract-utilities" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.516441 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d18fce9-5855-4ccf-bcc4-debbf5b9876b" containerName="registry-server" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.525440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.555072 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.555818 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.561665 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.630172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmrtt\" (UniqueName: \"kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt\") pod \"auto-csr-approver-29561302-27jnj\" (UID: \"20d63530-b676-4044-8ecb-bf5f833a2b82\") " pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.724743 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561302-27jnj"] Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.731837 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmrtt\" (UniqueName: \"kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt\") pod \"auto-csr-approver-29561302-27jnj\" (UID: \"20d63530-b676-4044-8ecb-bf5f833a2b82\") " pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.817594 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmrtt\" (UniqueName: \"kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt\") pod \"auto-csr-approver-29561302-27jnj\" (UID: \"20d63530-b676-4044-8ecb-bf5f833a2b82\") " pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:00 crc kubenswrapper[4736]: I0316 16:22:00.864308 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:02 crc kubenswrapper[4736]: I0316 16:22:02.021892 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561302-27jnj"] Mar 16 16:22:02 crc kubenswrapper[4736]: W0316 16:22:02.031126 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20d63530_b676_4044_8ecb_bf5f833a2b82.slice/crio-305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510 WatchSource:0}: Error finding container 305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510: Status 404 returned error can't find the container with id 305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510 Mar 16 16:22:02 crc kubenswrapper[4736]: I0316 16:22:02.473421 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561302-27jnj" event={"ID":"20d63530-b676-4044-8ecb-bf5f833a2b82","Type":"ContainerStarted","Data":"305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510"} Mar 16 16:22:03 crc kubenswrapper[4736]: I0316 16:22:03.697456 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:22:03 crc kubenswrapper[4736]: I0316 16:22:03.760682 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:22:04 crc kubenswrapper[4736]: I0316 16:22:04.394004 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:22:04 crc kubenswrapper[4736]: I0316 16:22:04.492917 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561302-27jnj" event={"ID":"20d63530-b676-4044-8ecb-bf5f833a2b82","Type":"ContainerStarted","Data":"8bb308ca494f1b8dc87e5ef73c2c867ea3995c8e824c3661ce7855bfb39a1859"} Mar 16 16:22:04 crc kubenswrapper[4736]: I0316 16:22:04.517353 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561302-27jnj" podStartSLOduration=3.542900662 podStartE2EDuration="4.515668793s" podCreationTimestamp="2026-03-16 16:22:00 +0000 UTC" firstStartedPulling="2026-03-16 16:22:02.046457908 +0000 UTC m=+4123.773848195" lastFinishedPulling="2026-03-16 16:22:03.019226029 +0000 UTC m=+4124.746616326" observedRunningTime="2026-03-16 16:22:04.511728045 +0000 UTC m=+4126.239118332" watchObservedRunningTime="2026-03-16 16:22:04.515668793 +0000 UTC m=+4126.243059080" Mar 16 16:22:05 crc kubenswrapper[4736]: I0316 16:22:05.509233 4736 generic.go:334] "Generic (PLEG): container finished" podID="20d63530-b676-4044-8ecb-bf5f833a2b82" containerID="8bb308ca494f1b8dc87e5ef73c2c867ea3995c8e824c3661ce7855bfb39a1859" exitCode=0 Mar 16 16:22:05 crc kubenswrapper[4736]: I0316 16:22:05.509802 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nmdjd" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" containerID="cri-o://b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e" gracePeriod=2 Mar 16 16:22:05 crc kubenswrapper[4736]: I0316 16:22:05.509354 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561302-27jnj" 
event={"ID":"20d63530-b676-4044-8ecb-bf5f833a2b82","Type":"ContainerDied","Data":"8bb308ca494f1b8dc87e5ef73c2c867ea3995c8e824c3661ce7855bfb39a1859"} Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.249159 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.345161 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjttl\" (UniqueName: \"kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl\") pod \"8bc901e7-285d-4be2-a795-8af462ef5bc0\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.345314 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content\") pod \"8bc901e7-285d-4be2-a795-8af462ef5bc0\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.345346 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities\") pod \"8bc901e7-285d-4be2-a795-8af462ef5bc0\" (UID: \"8bc901e7-285d-4be2-a795-8af462ef5bc0\") " Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.346481 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities" (OuterVolumeSpecName: "utilities") pod "8bc901e7-285d-4be2-a795-8af462ef5bc0" (UID: "8bc901e7-285d-4be2-a795-8af462ef5bc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.357012 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl" (OuterVolumeSpecName: "kube-api-access-zjttl") pod "8bc901e7-285d-4be2-a795-8af462ef5bc0" (UID: "8bc901e7-285d-4be2-a795-8af462ef5bc0"). InnerVolumeSpecName "kube-api-access-zjttl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.427801 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bc901e7-285d-4be2-a795-8af462ef5bc0" (UID: "8bc901e7-285d-4be2-a795-8af462ef5bc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.447529 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjttl\" (UniqueName: \"kubernetes.io/projected/8bc901e7-285d-4be2-a795-8af462ef5bc0-kube-api-access-zjttl\") on node \"crc\" DevicePath \"\"" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.447572 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.447581 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bc901e7-285d-4be2-a795-8af462ef5bc0-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.524075 4736 generic.go:334] "Generic (PLEG): container finished" podID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerID="b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e" exitCode=0 Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.524375 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmdjd" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.525847 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerDied","Data":"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e"} Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.525909 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmdjd" event={"ID":"8bc901e7-285d-4be2-a795-8af462ef5bc0","Type":"ContainerDied","Data":"c0f9bc782aa30a2fc401f35dc74ed38969a35d14c2e44d3810de0101a2777b2b"} Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.525932 4736 scope.go:117] "RemoveContainer" containerID="b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.571630 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.580350 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmdjd"] Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.582237 4736 scope.go:117] "RemoveContainer" containerID="d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.611635 4736 scope.go:117] "RemoveContainer" containerID="43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.675815 4736 scope.go:117] "RemoveContainer" containerID="b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e" Mar 16 16:22:06 crc kubenswrapper[4736]: E0316 16:22:06.677216 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e\": container with ID starting with b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e not found: ID does not exist" containerID="b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.677252 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e"} err="failed to get container status \"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e\": rpc error: code = NotFound desc = could not find container \"b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e\": container with ID starting with b30547d5bb81c40226ebb9bcda2bf1e1a959a7a66e5c01ec1bb8cb2794c0dd5e not found: ID does not exist" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.677276 4736 scope.go:117] "RemoveContainer" containerID="d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080" Mar 16 16:22:06 crc kubenswrapper[4736]: E0316 16:22:06.677924 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080\": container with ID starting with d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080 not found: ID does not exist" containerID="d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.677979 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080"} err="failed to get container status \"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080\": rpc error: code = NotFound desc = could not find container \"d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080\": container with ID starting with d3f52053470803583d4d6c700637eee32ad40dffacf1bac9794a6831f7073080 not found: ID does not exist" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.678008 4736 scope.go:117] "RemoveContainer" containerID="43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f" Mar 16 16:22:06 crc kubenswrapper[4736]: E0316 16:22:06.678480 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f\": container with ID starting with 43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f not found: ID does not exist" containerID="43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.678510 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f"} err="failed to get container status \"43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f\": rpc error: code = NotFound desc = could not find container \"43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f\": container with ID starting with 43c253fdee63f9924256e9db414b02d0072b4131d820a30b0ea48dbdb70e275f not found: ID does not exist" Mar 16 16:22:06 crc kubenswrapper[4736]: I0316 16:22:06.990052 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" path="/var/lib/kubelet/pods/8bc901e7-285d-4be2-a795-8af462ef5bc0/volumes" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.037835 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.058183 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmrtt\" (UniqueName: \"kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt\") pod \"20d63530-b676-4044-8ecb-bf5f833a2b82\" (UID: \"20d63530-b676-4044-8ecb-bf5f833a2b82\") " Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.063217 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt" (OuterVolumeSpecName: "kube-api-access-tmrtt") pod "20d63530-b676-4044-8ecb-bf5f833a2b82" (UID: "20d63530-b676-4044-8ecb-bf5f833a2b82"). InnerVolumeSpecName "kube-api-access-tmrtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.160589 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmrtt\" (UniqueName: \"kubernetes.io/projected/20d63530-b676-4044-8ecb-bf5f833a2b82-kube-api-access-tmrtt\") on node \"crc\" DevicePath \"\"" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.534358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561302-27jnj" event={"ID":"20d63530-b676-4044-8ecb-bf5f833a2b82","Type":"ContainerDied","Data":"305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510"} Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.534406 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305e2ba749b15ac9a6b86a6f04a9d55f7fbe7d8bd9955a00c5801708c7587510" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.534471 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561302-27jnj" Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.661233 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561296-tgdlv"] Mar 16 16:22:07 crc kubenswrapper[4736]: I0316 16:22:07.670560 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561296-tgdlv"] Mar 16 16:22:08 crc kubenswrapper[4736]: I0316 16:22:08.509042 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:22:08 crc kubenswrapper[4736]: I0316 16:22:08.509541 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:22:08 crc kubenswrapper[4736]: I0316 16:22:08.992667 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac3d572-9b5f-4831-b0b9-3694ed742563" path="/var/lib/kubelet/pods/9ac3d572-9b5f-4831-b0b9-3694ed742563/volumes" Mar 16 16:22:38 crc kubenswrapper[4736]: I0316 16:22:38.510497 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:22:38 crc kubenswrapper[4736]: I0316 16:22:38.512749 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:22:55 crc kubenswrapper[4736]: I0316 16:22:55.302079 4736 scope.go:117] "RemoveContainer" containerID="ec5740b8b51fc9d7e80eb844a6129a1ba4b3ca651801fe19c5acdc38451c3fdb" Mar 16 16:23:05 crc kubenswrapper[4736]: I0316 16:23:05.866815 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-678dd4f677-jxtsk" podUID="bccee937-d642-4483-87fb-033b157cf68c" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 16 16:23:08 crc kubenswrapper[4736]: I0316 16:23:08.507975 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:23:08 crc kubenswrapper[4736]: I0316 16:23:08.508371 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:23:08 crc kubenswrapper[4736]: I0316 16:23:08.509287 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:23:08 crc kubenswrapper[4736]: I0316 16:23:08.513015 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:23:08 crc kubenswrapper[4736]: I0316 16:23:08.513795 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe" gracePeriod=600 Mar 16 16:23:09 crc kubenswrapper[4736]: I0316 16:23:09.544942 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe" exitCode=0 Mar 16 16:23:09 crc kubenswrapper[4736]: I0316 16:23:09.545011 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe"} Mar 16 16:23:09 crc kubenswrapper[4736]: I0316 16:23:09.545720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619"} Mar 16 16:23:09 crc kubenswrapper[4736]: I0316 16:23:09.545746 4736 scope.go:117] "RemoveContainer" containerID="a65953eeb61d33348399aa5dfd427858cb26f488f2ab0414f9cff257e6e8bfb7" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.338069 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561304-kjhg7"] Mar 16 16:24:00 crc kubenswrapper[4736]: E0316 16:24:00.342188 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d63530-b676-4044-8ecb-bf5f833a2b82" containerName="oc" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342221 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d63530-b676-4044-8ecb-bf5f833a2b82" containerName="oc" Mar 16 16:24:00 crc kubenswrapper[4736]: E0316 16:24:00.342256 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="extract-content" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342265 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="extract-content" Mar 16 16:24:00 crc kubenswrapper[4736]: E0316 16:24:00.342281 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342292 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" Mar 16 16:24:00 crc kubenswrapper[4736]: E0316 16:24:00.342305 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="extract-utilities" Mar 16 
16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342313 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="extract-utilities" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342578 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d63530-b676-4044-8ecb-bf5f833a2b82" containerName="oc" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.342597 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc901e7-285d-4be2-a795-8af462ef5bc0" containerName="registry-server" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.349306 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.359373 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.359380 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.359377 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.396798 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561304-kjhg7"] Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.461454 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nqkb\" (UniqueName: \"kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb\") pod \"auto-csr-approver-29561304-kjhg7\" (UID: \"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863\") " pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.564974 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nqkb\" (UniqueName: \"kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb\") pod \"auto-csr-approver-29561304-kjhg7\" (UID: \"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863\") " pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.611681 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nqkb\" (UniqueName: \"kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb\") pod \"auto-csr-approver-29561304-kjhg7\" (UID: \"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863\") " pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:00 crc kubenswrapper[4736]: I0316 16:24:00.681661 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:02 crc kubenswrapper[4736]: I0316 16:24:02.016791 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561304-kjhg7"] Mar 16 16:24:03 crc kubenswrapper[4736]: I0316 16:24:03.044074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" event={"ID":"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863","Type":"ContainerStarted","Data":"8cc990968fe5a20434f38fcc4ded663d8c574cabb36f10c96a309e7c1bad1acd"} Mar 16 16:24:05 crc kubenswrapper[4736]: I0316 16:24:05.068726 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" event={"ID":"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863","Type":"ContainerStarted","Data":"18fb03053e130ab411f51828f3a605f41e6cbd0270669024a8e6e4b2d25730e3"} Mar 16 16:24:06 crc kubenswrapper[4736]: I0316 16:24:06.080135 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" event={"ID":"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863","Type":"ContainerDied","Data":"18fb03053e130ab411f51828f3a605f41e6cbd0270669024a8e6e4b2d25730e3"} Mar 16 16:24:06 crc kubenswrapper[4736]: I0316 16:24:06.080695 4736 generic.go:334] "Generic (PLEG): container finished" podID="0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" containerID="18fb03053e130ab411f51828f3a605f41e6cbd0270669024a8e6e4b2d25730e3" exitCode=0 Mar 16 16:24:07 crc kubenswrapper[4736]: I0316 16:24:07.489992 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:07 crc kubenswrapper[4736]: I0316 16:24:07.602193 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nqkb\" (UniqueName: \"kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb\") pod \"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863\" (UID: \"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863\") " Mar 16 16:24:07 crc kubenswrapper[4736]: I0316 16:24:07.614726 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb" (OuterVolumeSpecName: "kube-api-access-5nqkb") pod "0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" (UID: "0db2cb20-28f6-4aa3-a4b9-a58a09a5e863"). InnerVolumeSpecName "kube-api-access-5nqkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:24:07 crc kubenswrapper[4736]: I0316 16:24:07.704998 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nqkb\" (UniqueName: \"kubernetes.io/projected/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863-kube-api-access-5nqkb\") on node \"crc\" DevicePath \"\"" Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.098867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" event={"ID":"0db2cb20-28f6-4aa3-a4b9-a58a09a5e863","Type":"ContainerDied","Data":"8cc990968fe5a20434f38fcc4ded663d8c574cabb36f10c96a309e7c1bad1acd"} Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.099048 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561304-kjhg7" Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.100131 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cc990968fe5a20434f38fcc4ded663d8c574cabb36f10c96a309e7c1bad1acd" Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.197668 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561298-rcgzr"] Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.213370 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561298-rcgzr"] Mar 16 16:24:08 crc kubenswrapper[4736]: I0316 16:24:08.993927 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad9d14ec-5a0a-4f26-bbc5-969d76f2c364" path="/var/lib/kubelet/pods/ad9d14ec-5a0a-4f26-bbc5-969d76f2c364/volumes" Mar 16 16:24:55 crc kubenswrapper[4736]: I0316 16:24:55.859528 4736 scope.go:117] "RemoveContainer" containerID="ce83b4bec7fb35f84676bf5ba52f0d4c489f6c6ded8d15937e7f419b835c5a93" Mar 16 16:25:08 crc kubenswrapper[4736]: I0316 16:25:08.509390 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:25:08 crc kubenswrapper[4736]: I0316 16:25:08.515173 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:25:38 crc kubenswrapper[4736]: I0316 16:25:38.508309 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:25:38 crc kubenswrapper[4736]: I0316 16:25:38.508999 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.177538 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561306-c75pn"] Mar 16 16:26:00 crc kubenswrapper[4736]: E0316 16:26:00.180334 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" containerName="oc" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.180373 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" containerName="oc" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.180579 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" containerName="oc" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.181199 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.184301 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.185553 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.211580 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.303464 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561306-c75pn"] Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.356792 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbfpq\" (UniqueName: \"kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq\") pod \"auto-csr-approver-29561306-c75pn\" (UID: \"d0de3b91-e956-46ee-b977-c8417aa86d2c\") " pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.458610 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbfpq\" (UniqueName: \"kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq\") pod \"auto-csr-approver-29561306-c75pn\" (UID: \"d0de3b91-e956-46ee-b977-c8417aa86d2c\") " pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.485885 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbfpq\" (UniqueName: \"kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq\") pod \"auto-csr-approver-29561306-c75pn\" (UID: \"d0de3b91-e956-46ee-b977-c8417aa86d2c\") " pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:00 crc kubenswrapper[4736]: I0316 16:26:00.568911 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:01 crc kubenswrapper[4736]: I0316 16:26:01.217011 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561306-c75pn"] Mar 16 16:26:02 crc kubenswrapper[4736]: I0316 16:26:02.174498 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561306-c75pn" event={"ID":"d0de3b91-e956-46ee-b977-c8417aa86d2c","Type":"ContainerStarted","Data":"79c8d466bf02e2598138589dd0f8ea38f9edb2f5973935d017e98119ae382bce"} Mar 16 16:26:04 crc kubenswrapper[4736]: I0316 16:26:04.198131 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561306-c75pn" event={"ID":"d0de3b91-e956-46ee-b977-c8417aa86d2c","Type":"ContainerStarted","Data":"1b85a3062ebabaa1328dfa0288420a69f610be04180d4f43323868e6723efc9f"} Mar 16 16:26:04 crc kubenswrapper[4736]: I0316 16:26:04.222584 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561306-c75pn" podStartSLOduration=3.407107607 podStartE2EDuration="4.220508184s" podCreationTimestamp="2026-03-16 16:26:00 +0000 UTC" firstStartedPulling="2026-03-16 16:26:01.229033859 +0000 UTC m=+4362.956424146" lastFinishedPulling="2026-03-16 16:26:02.042434436 +0000 UTC m=+4363.769824723" observedRunningTime="2026-03-16 16:26:04.212987329 +0000 UTC m=+4365.940377616" watchObservedRunningTime="2026-03-16 16:26:04.220508184 +0000 UTC m=+4365.947898471" Mar 16 16:26:05 crc kubenswrapper[4736]: I0316 16:26:05.208070 4736 generic.go:334] "Generic (PLEG): container finished" podID="d0de3b91-e956-46ee-b977-c8417aa86d2c" containerID="1b85a3062ebabaa1328dfa0288420a69f610be04180d4f43323868e6723efc9f" exitCode=0 Mar 16 16:26:05 crc kubenswrapper[4736]: I0316 16:26:05.208253 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561306-c75pn" event={"ID":"d0de3b91-e956-46ee-b977-c8417aa86d2c","Type":"ContainerDied","Data":"1b85a3062ebabaa1328dfa0288420a69f610be04180d4f43323868e6723efc9f"} Mar 16 16:26:06 crc kubenswrapper[4736]: I0316 16:26:06.596803 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:06 crc kubenswrapper[4736]: I0316 16:26:06.672157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbfpq\" (UniqueName: \"kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq\") pod \"d0de3b91-e956-46ee-b977-c8417aa86d2c\" (UID: \"d0de3b91-e956-46ee-b977-c8417aa86d2c\") " Mar 16 16:26:06 crc kubenswrapper[4736]: I0316 16:26:06.685649 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq" (OuterVolumeSpecName: "kube-api-access-zbfpq") pod "d0de3b91-e956-46ee-b977-c8417aa86d2c" (UID: "d0de3b91-e956-46ee-b977-c8417aa86d2c"). InnerVolumeSpecName "kube-api-access-zbfpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:26:06 crc kubenswrapper[4736]: I0316 16:26:06.773967 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbfpq\" (UniqueName: \"kubernetes.io/projected/d0de3b91-e956-46ee-b977-c8417aa86d2c-kube-api-access-zbfpq\") on node \"crc\" DevicePath \"\"" Mar 16 16:26:07 crc kubenswrapper[4736]: I0316 16:26:07.229536 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561306-c75pn" event={"ID":"d0de3b91-e956-46ee-b977-c8417aa86d2c","Type":"ContainerDied","Data":"79c8d466bf02e2598138589dd0f8ea38f9edb2f5973935d017e98119ae382bce"} Mar 16 16:26:07 crc kubenswrapper[4736]: I0316 16:26:07.229846 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79c8d466bf02e2598138589dd0f8ea38f9edb2f5973935d017e98119ae382bce" Mar 16 16:26:07 crc kubenswrapper[4736]: I0316 16:26:07.229714 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561306-c75pn" Mar 16 16:26:07 crc kubenswrapper[4736]: I0316 16:26:07.285215 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561300-tt5hr"] Mar 16 16:26:07 crc kubenswrapper[4736]: I0316 16:26:07.293070 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561300-tt5hr"] Mar 16 16:26:08 crc kubenswrapper[4736]: I0316 16:26:08.508596 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:26:08 crc kubenswrapper[4736]: I0316 16:26:08.509012 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:26:08 crc kubenswrapper[4736]: I0316 16:26:08.509066 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:26:08 crc kubenswrapper[4736]: I0316 16:26:08.511353 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:26:08 crc kubenswrapper[4736]: I0316 16:26:08.511807 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" gracePeriod=600 Mar 16 16:26:08 crc kubenswrapper[4736]: E0316 16:26:08.642670 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:26:09 crc kubenswrapper[4736]: I0316 16:26:09.008775 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4012606f-be73-4dc2-9c22-1dc076985291" path="/var/lib/kubelet/pods/4012606f-be73-4dc2-9c22-1dc076985291/volumes" Mar 16 16:26:09 crc kubenswrapper[4736]: I0316 16:26:09.255059 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" exitCode=0 Mar 16 16:26:09 crc kubenswrapper[4736]: I0316 16:26:09.255170 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619"} Mar 16 16:26:09 crc kubenswrapper[4736]: I0316 16:26:09.255265 4736 scope.go:117] "RemoveContainer" containerID="02cf557e228638c4bab6b5abe16ceb58a685cf32678b273a9ddb78e9f99373fe" Mar 16 16:26:09 crc kubenswrapper[4736]: I0316 16:26:09.256916 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:26:09 crc kubenswrapper[4736]: E0316 16:26:09.257708 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:26:23 crc kubenswrapper[4736]: I0316 16:26:23.978209 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:26:23 crc kubenswrapper[4736]: E0316 16:26:23.978999 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:26:34 crc kubenswrapper[4736]: I0316 16:26:34.983194 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:26:34 crc kubenswrapper[4736]: E0316 16:26:34.983836 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:26:46 crc kubenswrapper[4736]: I0316 16:26:46.978725 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:26:46 crc kubenswrapper[4736]: E0316 16:26:46.979608 4736 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:26:56 crc kubenswrapper[4736]: I0316 16:26:56.071306 4736 scope.go:117] "RemoveContainer" containerID="a130e7a117153b4016e6c6d75b16db2983215e5a3159b822f86f2040aceae8ee" Mar 16 16:27:00 crc kubenswrapper[4736]: I0316 16:27:00.978697 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:27:00 crc kubenswrapper[4736]: E0316 16:27:00.979633 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.327891 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:27:11 crc kubenswrapper[4736]: E0316 16:27:11.334505 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0de3b91-e956-46ee-b977-c8417aa86d2c" containerName="oc" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.335291 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0de3b91-e956-46ee-b977-c8417aa86d2c" containerName="oc" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.336586 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0de3b91-e956-46ee-b977-c8417aa86d2c" containerName="oc" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.342312 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.342326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.511232 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pnr\" (UniqueName: \"kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.511351 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.511461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.613644 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pnr\" (UniqueName: \"kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.613744 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.613826 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.615806 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.615952 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.651639 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z9pnr\" (UniqueName: \"kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr\") pod \"redhat-operators-42qms\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:11 crc kubenswrapper[4736]: I0316 16:27:11.678512 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:12 crc kubenswrapper[4736]: I0316 16:27:12.902548 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:27:13 crc kubenswrapper[4736]: I0316 16:27:13.848504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerDied","Data":"e7de68d895cc732d6ac5d53d99d5d8505b095c8b0ec37bdc0505357879a753ca"} Mar 16 16:27:13 crc kubenswrapper[4736]: I0316 16:27:13.848862 4736 generic.go:334] "Generic (PLEG): container finished" podID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerID="e7de68d895cc732d6ac5d53d99d5d8505b095c8b0ec37bdc0505357879a753ca" exitCode=0 Mar 16 16:27:13 crc kubenswrapper[4736]: I0316 16:27:13.848988 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerStarted","Data":"9f97e52d09ee156d7ec50caf5036772a53763e3ac29791949dd0d821eab170d7"} Mar 16 16:27:13 crc kubenswrapper[4736]: I0316 16:27:13.858157 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:27:14 crc kubenswrapper[4736]: I0316 16:27:14.860319 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerStarted","Data":"57b2c04dc4f776f37292938e6c08df36adb149e571ac47223dea86e4ba179cfe"} Mar 16 16:27:15 crc kubenswrapper[4736]: I0316 16:27:15.978567 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:27:15 crc kubenswrapper[4736]: E0316 16:27:15.979172 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:27:20 crc kubenswrapper[4736]: I0316 16:27:20.914696 4736 generic.go:334] "Generic (PLEG): container finished" podID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerID="57b2c04dc4f776f37292938e6c08df36adb149e571ac47223dea86e4ba179cfe" exitCode=0 Mar 16 16:27:20 crc kubenswrapper[4736]: I0316 16:27:20.914773 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerDied","Data":"57b2c04dc4f776f37292938e6c08df36adb149e571ac47223dea86e4ba179cfe"} Mar 16 16:27:21 crc kubenswrapper[4736]: I0316 16:27:21.945320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" 
event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerStarted","Data":"8e4425fdb150b0742e12863bf454c766d46633445933afd668660c9086bbdd4c"} Mar 16 16:27:21 crc kubenswrapper[4736]: I0316 16:27:21.992604 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-42qms" podStartSLOduration=3.439787248 podStartE2EDuration="10.98850636s" podCreationTimestamp="2026-03-16 16:27:11 +0000 UTC" firstStartedPulling="2026-03-16 16:27:13.851603748 +0000 UTC m=+4435.578994045" lastFinishedPulling="2026-03-16 16:27:21.40032287 +0000 UTC m=+4443.127713157" observedRunningTime="2026-03-16 16:27:21.968049133 +0000 UTC m=+4443.695439420" watchObservedRunningTime="2026-03-16 16:27:21.98850636 +0000 UTC m=+4443.715896647" Mar 16 16:27:29 crc kubenswrapper[4736]: I0316 16:27:29.978011 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:27:29 crc kubenswrapper[4736]: E0316 16:27:29.978721 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:27:31 crc kubenswrapper[4736]: I0316 16:27:31.680214 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:31 crc kubenswrapper[4736]: I0316 16:27:31.680530 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:27:32 crc kubenswrapper[4736]: I0316 16:27:32.737733 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-42qms" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" probeResult="failure" output=< Mar 16 16:27:32 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:27:32 crc kubenswrapper[4736]: > Mar 16 16:27:40 crc kubenswrapper[4736]: I0316 16:27:40.978590 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:27:40 crc kubenswrapper[4736]: E0316 16:27:40.980490 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:27:42 crc kubenswrapper[4736]: I0316 16:27:42.804287 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-42qms" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" probeResult="failure" output=< Mar 16 16:27:42 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:27:42 crc kubenswrapper[4736]: > Mar 16 16:27:53 crc kubenswrapper[4736]: I0316 16:27:53.230893 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-42qms" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" 
containerName="registry-server" probeResult="failure" output=< Mar 16 16:27:53 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:27:53 crc kubenswrapper[4736]: > Mar 16 16:27:55 crc kubenswrapper[4736]: I0316 16:27:55.978179 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:27:55 crc kubenswrapper[4736]: E0316 16:27:55.978604 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.448720 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561308-bw4k6"] Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.463015 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.481354 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.481348 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.481427 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.544530 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csldr\" (UniqueName: \"kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr\") pod \"auto-csr-approver-29561308-bw4k6\" (UID: \"dd89b27a-c162-465d-b875-c1e672b27a67\") " pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.585145 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561308-bw4k6"] Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.646343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csldr\" (UniqueName: \"kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr\") pod \"auto-csr-approver-29561308-bw4k6\" (UID: \"dd89b27a-c162-465d-b875-c1e672b27a67\") " pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.691026 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csldr\" (UniqueName: \"kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr\") pod \"auto-csr-approver-29561308-bw4k6\" (UID: \"dd89b27a-c162-465d-b875-c1e672b27a67\") " pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:00 crc kubenswrapper[4736]: I0316 16:28:00.818860 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:02 crc kubenswrapper[4736]: I0316 16:28:02.755439 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561308-bw4k6"] Mar 16 16:28:02 crc kubenswrapper[4736]: I0316 16:28:02.760779 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-42qms" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" probeResult="failure" output=< Mar 16 16:28:02 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:28:02 crc kubenswrapper[4736]: > Mar 16 16:28:03 crc kubenswrapper[4736]: W0316 16:28:03.115622 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd89b27a_c162_465d_b875_c1e672b27a67.slice/crio-6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba WatchSource:0}: Error finding container 6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba: Status 404 returned error can't find the container with id 6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba Mar 16 16:28:03 crc kubenswrapper[4736]: I0316 16:28:03.342693 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" event={"ID":"dd89b27a-c162-465d-b875-c1e672b27a67","Type":"ContainerStarted","Data":"6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba"} Mar 16 16:28:05 crc kubenswrapper[4736]: I0316 16:28:05.361845 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" event={"ID":"dd89b27a-c162-465d-b875-c1e672b27a67","Type":"ContainerStarted","Data":"b34f09fcd38e6f55fe8b42819aa3b082e66c5fd3c860971b28d514e3a71314b5"} Mar 16 16:28:05 crc kubenswrapper[4736]: I0316 16:28:05.396330 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" podStartSLOduration=4.329283256 podStartE2EDuration="5.396305782s" podCreationTimestamp="2026-03-16 16:28:00 +0000 UTC" firstStartedPulling="2026-03-16 16:28:03.124282738 +0000 UTC m=+4484.851673025" lastFinishedPulling="2026-03-16 16:28:04.191305244 +0000 UTC m=+4485.918695551" observedRunningTime="2026-03-16 16:28:05.386354881 +0000 UTC m=+4487.113745168" watchObservedRunningTime="2026-03-16 16:28:05.396305782 +0000 UTC m=+4487.123696079" Mar 16 16:28:06 crc kubenswrapper[4736]: I0316 16:28:06.375263 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" event={"ID":"dd89b27a-c162-465d-b875-c1e672b27a67","Type":"ContainerDied","Data":"b34f09fcd38e6f55fe8b42819aa3b082e66c5fd3c860971b28d514e3a71314b5"} Mar 16 16:28:06 crc kubenswrapper[4736]: I0316 16:28:06.375701 4736 generic.go:334] "Generic (PLEG): container finished" podID="dd89b27a-c162-465d-b875-c1e672b27a67" containerID="b34f09fcd38e6f55fe8b42819aa3b082e66c5fd3c860971b28d514e3a71314b5" exitCode=0 Mar 16 16:28:07 crc kubenswrapper[4736]: I0316 16:28:07.856808 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:07 crc kubenswrapper[4736]: I0316 16:28:07.889692 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csldr\" (UniqueName: \"kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr\") pod \"dd89b27a-c162-465d-b875-c1e672b27a67\" (UID: \"dd89b27a-c162-465d-b875-c1e672b27a67\") " Mar 16 16:28:07 crc kubenswrapper[4736]: I0316 16:28:07.909312 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr" (OuterVolumeSpecName: "kube-api-access-csldr") pod "dd89b27a-c162-465d-b875-c1e672b27a67" (UID: "dd89b27a-c162-465d-b875-c1e672b27a67"). InnerVolumeSpecName "kube-api-access-csldr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:28:07 crc kubenswrapper[4736]: I0316 16:28:07.979673 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:28:07 crc kubenswrapper[4736]: E0316 16:28:07.980414 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:28:07 crc kubenswrapper[4736]: I0316 16:28:07.992709 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csldr\" (UniqueName: \"kubernetes.io/projected/dd89b27a-c162-465d-b875-c1e672b27a67-kube-api-access-csldr\") on node \"crc\" DevicePath \"\"" Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.396380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" event={"ID":"dd89b27a-c162-465d-b875-c1e672b27a67","Type":"ContainerDied","Data":"6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba"} Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.396508 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561308-bw4k6" Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.396971 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ca73b53a8923ab85e8ce335ad7b49bfd7e8ff0abf5e15fb9d7b6afd2f92daba" Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.494883 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561302-27jnj"] Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.508313 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561302-27jnj"] Mar 16 16:28:08 crc kubenswrapper[4736]: I0316 16:28:08.990601 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d63530-b676-4044-8ecb-bf5f833a2b82" path="/var/lib/kubelet/pods/20d63530-b676-4044-8ecb-bf5f833a2b82/volumes" Mar 16 16:28:11 crc kubenswrapper[4736]: I0316 16:28:11.778194 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:28:11 crc kubenswrapper[4736]: I0316 16:28:11.858943 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:28:12 crc kubenswrapper[4736]: I0316 16:28:12.563513 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:28:13 crc kubenswrapper[4736]: I0316 16:28:13.449454 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-42qms" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" containerID="cri-o://8e4425fdb150b0742e12863bf454c766d46633445933afd668660c9086bbdd4c" gracePeriod=2 Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.463968 4736 generic.go:334] "Generic (PLEG): container finished" podID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerID="8e4425fdb150b0742e12863bf454c766d46633445933afd668660c9086bbdd4c" exitCode=0 Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.464196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerDied","Data":"8e4425fdb150b0742e12863bf454c766d46633445933afd668660c9086bbdd4c"} Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.790448 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.930806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities\") pod \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.930886 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content\") pod \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.930929 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9pnr\" (UniqueName: \"kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr\") pod \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\" (UID: \"ce53390c-6f7c-4752-b0c8-e982af5a78a9\") " Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.934485 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities" (OuterVolumeSpecName: "utilities") pod "ce53390c-6f7c-4752-b0c8-e982af5a78a9" (UID: "ce53390c-6f7c-4752-b0c8-e982af5a78a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:28:14 crc kubenswrapper[4736]: I0316 16:28:14.950912 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr" (OuterVolumeSpecName: "kube-api-access-z9pnr") pod "ce53390c-6f7c-4752-b0c8-e982af5a78a9" (UID: "ce53390c-6f7c-4752-b0c8-e982af5a78a9"). InnerVolumeSpecName "kube-api-access-z9pnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.033930 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.034012 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9pnr\" (UniqueName: \"kubernetes.io/projected/ce53390c-6f7c-4752-b0c8-e982af5a78a9-kube-api-access-z9pnr\") on node \"crc\" DevicePath \"\"" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.129026 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce53390c-6f7c-4752-b0c8-e982af5a78a9" (UID: "ce53390c-6f7c-4752-b0c8-e982af5a78a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.135360 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce53390c-6f7c-4752-b0c8-e982af5a78a9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.479552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42qms" event={"ID":"ce53390c-6f7c-4752-b0c8-e982af5a78a9","Type":"ContainerDied","Data":"9f97e52d09ee156d7ec50caf5036772a53763e3ac29791949dd0d821eab170d7"} Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.479605 4736 scope.go:117] "RemoveContainer" containerID="8e4425fdb150b0742e12863bf454c766d46633445933afd668660c9086bbdd4c" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.480557 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42qms" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.556731 4736 scope.go:117] "RemoveContainer" containerID="57b2c04dc4f776f37292938e6c08df36adb149e571ac47223dea86e4ba179cfe" Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.584193 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.609682 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-42qms"] Mar 16 16:28:15 crc kubenswrapper[4736]: I0316 16:28:15.808840 4736 scope.go:117] "RemoveContainer" containerID="e7de68d895cc732d6ac5d53d99d5d8505b095c8b0ec37bdc0505357879a753ca" Mar 16 16:28:16 crc kubenswrapper[4736]: I0316 16:28:16.990023 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" path="/var/lib/kubelet/pods/ce53390c-6f7c-4752-b0c8-e982af5a78a9/volumes" Mar 16 16:28:19 crc kubenswrapper[4736]: I0316 16:28:19.979088 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:28:19 crc kubenswrapper[4736]: E0316 16:28:19.979941 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:28:34 crc kubenswrapper[4736]: I0316 16:28:34.978230 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:28:34 crc kubenswrapper[4736]: E0316 16:28:34.978947 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:28:48 crc kubenswrapper[4736]: I0316 16:28:48.985073 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:28:48 crc kubenswrapper[4736]: E0316 16:28:48.985821 
4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:28:56 crc kubenswrapper[4736]: I0316 16:28:56.354799 4736 scope.go:117] "RemoveContainer" containerID="8bb308ca494f1b8dc87e5ef73c2c867ea3995c8e824c3661ce7855bfb39a1859" Mar 16 16:29:02 crc kubenswrapper[4736]: I0316 16:29:02.980752 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:29:02 crc kubenswrapper[4736]: E0316 16:29:02.981625 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:29:16 crc kubenswrapper[4736]: I0316 16:29:16.978702 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:29:16 crc kubenswrapper[4736]: E0316 16:29:16.979531 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:29:31 crc kubenswrapper[4736]: I0316 16:29:31.978459 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:29:31 crc kubenswrapper[4736]: E0316 16:29:31.979325 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:29:43 crc kubenswrapper[4736]: I0316 16:29:43.978052 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:29:43 crc kubenswrapper[4736]: E0316 16:29:43.978807 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:29:54 crc kubenswrapper[4736]: I0316 16:29:54.978119 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:29:54 crc kubenswrapper[4736]: E0316 16:29:54.979723 4736 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.224640 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561310-8jbq2"] Mar 16 16:30:00 crc kubenswrapper[4736]: E0316 16:30:00.227338 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="extract-content" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.227395 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="extract-content" Mar 16 16:30:00 crc kubenswrapper[4736]: E0316 16:30:00.227427 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.227440 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" Mar 16 16:30:00 crc kubenswrapper[4736]: E0316 16:30:00.227464 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="extract-utilities" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.227476 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="extract-utilities" Mar 16 16:30:00 crc kubenswrapper[4736]: E0316 16:30:00.227510 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd89b27a-c162-465d-b875-c1e672b27a67" containerName="oc" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.227522 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd89b27a-c162-465d-b875-c1e672b27a67" containerName="oc" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.231165 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd89b27a-c162-465d-b875-c1e672b27a67" containerName="oc" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.231276 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce53390c-6f7c-4752-b0c8-e982af5a78a9" containerName="registry-server" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.235235 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.238503 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7"] Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.240209 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.242879 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.242894 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.242895 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.244656 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.244958 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.252245 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561310-8jbq2"] Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.283262 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7"] Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.377016 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.377085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpbl\" (UniqueName: \"kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl\") pod \"auto-csr-approver-29561310-8jbq2\" (UID: \"1aa1b59c-f1ce-4bad-ad42-d1c383855885\") " pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.377214 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.377282 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4m4s\" (UniqueName: \"kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.478547 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.478636 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4m4s\" (UniqueName: \"kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.478745 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.478780 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmpbl\" (UniqueName: \"kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl\") pod \"auto-csr-approver-29561310-8jbq2\" (UID: \"1aa1b59c-f1ce-4bad-ad42-d1c383855885\") " pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.481010 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.487998 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.993321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmpbl\" (UniqueName: \"kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl\") pod \"auto-csr-approver-29561310-8jbq2\" (UID: \"1aa1b59c-f1ce-4bad-ad42-d1c383855885\") " pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:00 crc kubenswrapper[4736]: I0316 16:30:00.994734 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4m4s\" (UniqueName: \"kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s\") pod \"collect-profiles-29561310-g7hc7\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:01 crc kubenswrapper[4736]: I0316 16:30:01.165050 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:01 crc kubenswrapper[4736]: I0316 16:30:01.178871 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:02 crc kubenswrapper[4736]: I0316 16:30:02.321305 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561310-8jbq2"] Mar 16 16:30:02 crc kubenswrapper[4736]: I0316 16:30:02.335165 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7"] Mar 16 16:30:03 crc kubenswrapper[4736]: I0316 16:30:03.227733 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" event={"ID":"1aa1b59c-f1ce-4bad-ad42-d1c383855885","Type":"ContainerStarted","Data":"82fb31a0d13a2f3fae1ea8796f423258f4a69298230c66a2c5fd4345d13d4af9"} Mar 16 16:30:03 crc kubenswrapper[4736]: I0316 16:30:03.230384 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" event={"ID":"1c2f5143-440b-4a2c-90c5-ce6b185936c3","Type":"ContainerStarted","Data":"aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c"} Mar 16 16:30:03 crc kubenswrapper[4736]: I0316 16:30:03.230449 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" event={"ID":"1c2f5143-440b-4a2c-90c5-ce6b185936c3","Type":"ContainerStarted","Data":"cfc7a1616fc3d8ebcd3dda8d767e3c4789022b31b91b4aa80fb06b14e9f088c0"} Mar 16 16:30:03 crc kubenswrapper[4736]: I0316 16:30:03.254505 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" podStartSLOduration=3.252333246 podStartE2EDuration="3.252333246s" podCreationTimestamp="2026-03-16 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 16:30:03.24765462 +0000 UTC m=+4604.975044927" watchObservedRunningTime="2026-03-16 16:30:03.252333246 +0000 UTC m=+4604.979723543" Mar 16 16:30:04 crc kubenswrapper[4736]: E0316 16:30:04.023156 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2f5143_440b_4a2c_90c5_ce6b185936c3.slice/crio-conmon-aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2f5143_440b_4a2c_90c5_ce6b185936c3.slice/crio-aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:30:04 crc kubenswrapper[4736]: I0316 16:30:04.240277 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" event={"ID":"1c2f5143-440b-4a2c-90c5-ce6b185936c3","Type":"ContainerDied","Data":"aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c"} Mar 16 16:30:04 crc kubenswrapper[4736]: I0316 16:30:04.240456 4736 generic.go:334] "Generic (PLEG): container finished" podID="1c2f5143-440b-4a2c-90c5-ce6b185936c3" containerID="aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c" exitCode=0 Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.715595 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.777803 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4m4s\" (UniqueName: \"kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s\") pod \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.777905 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume\") pod \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.778002 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume\") pod \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\" (UID: \"1c2f5143-440b-4a2c-90c5-ce6b185936c3\") " Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.782095 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c2f5143-440b-4a2c-90c5-ce6b185936c3" (UID: "1c2f5143-440b-4a2c-90c5-ce6b185936c3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.787297 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s" (OuterVolumeSpecName: "kube-api-access-j4m4s") pod "1c2f5143-440b-4a2c-90c5-ce6b185936c3" (UID: "1c2f5143-440b-4a2c-90c5-ce6b185936c3"). InnerVolumeSpecName "kube-api-access-j4m4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.789368 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c2f5143-440b-4a2c-90c5-ce6b185936c3" (UID: "1c2f5143-440b-4a2c-90c5-ce6b185936c3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.881004 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4m4s\" (UniqueName: \"kubernetes.io/projected/1c2f5143-440b-4a2c-90c5-ce6b185936c3-kube-api-access-j4m4s\") on node \"crc\" DevicePath \"\"" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.881040 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c2f5143-440b-4a2c-90c5-ce6b185936c3-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:30:05 crc kubenswrapper[4736]: I0316 16:30:05.881049 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c2f5143-440b-4a2c-90c5-ce6b185936c3-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.261084 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" event={"ID":"1c2f5143-440b-4a2c-90c5-ce6b185936c3","Type":"ContainerDied","Data":"cfc7a1616fc3d8ebcd3dda8d767e3c4789022b31b91b4aa80fb06b14e9f088c0"} Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.261126 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7" Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.262502 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfc7a1616fc3d8ebcd3dda8d767e3c4789022b31b91b4aa80fb06b14e9f088c0" Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.263220 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" event={"ID":"1aa1b59c-f1ce-4bad-ad42-d1c383855885","Type":"ContainerStarted","Data":"abbb4a051e1d8c6b638ef802f7af399490a05dcf11a96b271ce274b11aa7eb4e"} Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.279563 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" podStartSLOduration=3.953486811 podStartE2EDuration="6.279540844s" podCreationTimestamp="2026-03-16 16:30:00 +0000 UTC" firstStartedPulling="2026-03-16 16:30:02.724282533 +0000 UTC m=+4604.451672840" lastFinishedPulling="2026-03-16 16:30:05.050336586 +0000 UTC m=+4606.777726873" observedRunningTime="2026-03-16 16:30:06.275538065 +0000 UTC m=+4608.002928352" watchObservedRunningTime="2026-03-16 16:30:06.279540844 +0000 UTC m=+4608.006931131" Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.831690 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg"] Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.876055 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561265-v59gg"] Mar 16 16:30:06 crc kubenswrapper[4736]: I0316 16:30:06.990539 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bfc0fbd-42db-46d1-9c49-32da6f56fef4" path="/var/lib/kubelet/pods/4bfc0fbd-42db-46d1-9c49-32da6f56fef4/volumes" Mar 16 16:30:07 crc kubenswrapper[4736]: I0316 16:30:07.272007 4736 generic.go:334] "Generic (PLEG): container finished" podID="1aa1b59c-f1ce-4bad-ad42-d1c383855885" containerID="abbb4a051e1d8c6b638ef802f7af399490a05dcf11a96b271ce274b11aa7eb4e" exitCode=0 Mar 16 16:30:07 crc kubenswrapper[4736]: I0316 
16:30:07.272048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" event={"ID":"1aa1b59c-f1ce-4bad-ad42-d1c383855885","Type":"ContainerDied","Data":"abbb4a051e1d8c6b638ef802f7af399490a05dcf11a96b271ce274b11aa7eb4e"} Mar 16 16:30:08 crc kubenswrapper[4736]: I0316 16:30:08.747889 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:08 crc kubenswrapper[4736]: I0316 16:30:08.837748 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmpbl\" (UniqueName: \"kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl\") pod \"1aa1b59c-f1ce-4bad-ad42-d1c383855885\" (UID: \"1aa1b59c-f1ce-4bad-ad42-d1c383855885\") " Mar 16 16:30:08 crc kubenswrapper[4736]: I0316 16:30:08.846032 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl" (OuterVolumeSpecName: "kube-api-access-rmpbl") pod "1aa1b59c-f1ce-4bad-ad42-d1c383855885" (UID: "1aa1b59c-f1ce-4bad-ad42-d1c383855885"). InnerVolumeSpecName "kube-api-access-rmpbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:30:08 crc kubenswrapper[4736]: I0316 16:30:08.940179 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmpbl\" (UniqueName: \"kubernetes.io/projected/1aa1b59c-f1ce-4bad-ad42-d1c383855885-kube-api-access-rmpbl\") on node \"crc\" DevicePath \"\"" Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.291872 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" event={"ID":"1aa1b59c-f1ce-4bad-ad42-d1c383855885","Type":"ContainerDied","Data":"82fb31a0d13a2f3fae1ea8796f423258f4a69298230c66a2c5fd4345d13d4af9"} Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.291920 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fb31a0d13a2f3fae1ea8796f423258f4a69298230c66a2c5fd4345d13d4af9" Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.292325 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561310-8jbq2" Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.360733 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561304-kjhg7"] Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.371299 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561304-kjhg7"] Mar 16 16:30:09 crc kubenswrapper[4736]: I0316 16:30:09.977945 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:30:09 crc kubenswrapper[4736]: E0316 16:30:09.978565 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:30:10 crc kubenswrapper[4736]: I0316 16:30:10.988317 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db2cb20-28f6-4aa3-a4b9-a58a09a5e863" path="/var/lib/kubelet/pods/0db2cb20-28f6-4aa3-a4b9-a58a09a5e863/volumes" Mar 16 16:30:24 crc kubenswrapper[4736]: I0316 16:30:24.979379 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:30:24 crc kubenswrapper[4736]: E0316 16:30:24.980114 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:30:38 crc kubenswrapper[4736]: I0316 16:30:38.985755 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:30:38 crc kubenswrapper[4736]: E0316 16:30:38.987633 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:30:49 crc kubenswrapper[4736]: I0316 16:30:49.978049 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:30:49 crc kubenswrapper[4736]: E0316 16:30:49.979605 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:30:56 crc kubenswrapper[4736]: I0316 16:30:56.622162 4736 scope.go:117] "RemoveContainer" containerID="f728562bfa062c53d64c656da6a15919972173ad0695ae030c04d1368a0482a1" Mar 16 
16:30:56 crc kubenswrapper[4736]: I0316 16:30:56.676458 4736 scope.go:117] "RemoveContainer" containerID="18fb03053e130ab411f51828f3a605f41e6cbd0270669024a8e6e4b2d25730e3" Mar 16 16:31:03 crc kubenswrapper[4736]: I0316 16:31:03.978495 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:31:03 crc kubenswrapper[4736]: E0316 16:31:03.979651 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:31:14 crc kubenswrapper[4736]: I0316 16:31:14.983940 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:31:15 crc kubenswrapper[4736]: I0316 16:31:15.965438 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350"} Mar 16 16:31:19 crc kubenswrapper[4736]: I0316 16:31:19.458945 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" podUID="34b67803-050a-457b-80ff-64455949a26d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:31:19 crc kubenswrapper[4736]: I0316 16:31:19.463875 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5dbd94f64-hsp7x" podUID="34b67803-050a-457b-80ff-64455949a26d" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 16:31:19 crc kubenswrapper[4736]: I0316 16:31:19.629592 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-tbvd2" podUID="eea9e7aa-6f24-4b45-b7b4-347a38dccb64" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 16 16:31:42 crc kubenswrapper[4736]: I0316 16:31:42.992208 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:31:43 crc kubenswrapper[4736]: E0316 16:31:42.998597 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa1b59c-f1ce-4bad-ad42-d1c383855885" containerName="oc" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:42.999035 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa1b59c-f1ce-4bad-ad42-d1c383855885" containerName="oc" Mar 16 16:31:43 crc kubenswrapper[4736]: E0316 16:31:42.999055 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c2f5143-440b-4a2c-90c5-ce6b185936c3" containerName="collect-profiles" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:42.999069 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c2f5143-440b-4a2c-90c5-ce6b185936c3" containerName="collect-profiles" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:42.999991 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1aa1b59c-f1ce-4bad-ad42-d1c383855885" containerName="oc" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.000024 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c2f5143-440b-4a2c-90c5-ce6b185936c3" containerName="collect-profiles" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.016619 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.123890 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.161040 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.161284 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.161501 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpb44\" (UniqueName: \"kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.263785 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.263916 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.263998 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpb44\" (UniqueName: \"kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.267200 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.267209 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.300086 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpb44\" (UniqueName: \"kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44\") pod \"certified-operators-jkqzr\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.363687 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:43 crc kubenswrapper[4736]: I0316 16:31:43.954551 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:31:44 crc kubenswrapper[4736]: I0316 16:31:44.245730 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerStarted","Data":"87b93cb6da27f5f133f3ca30beb1491a056c856d7454094f9834db6b86ac812e"} Mar 16 16:31:45 crc kubenswrapper[4736]: I0316 16:31:45.257623 4736 generic.go:334] "Generic (PLEG): container finished" podID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerID="4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2" exitCode=0 Mar 16 16:31:45 crc kubenswrapper[4736]: I0316 16:31:45.257710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerDied","Data":"4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2"} Mar 16 16:31:47 crc kubenswrapper[4736]: I0316 16:31:47.279014 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerStarted","Data":"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f"} Mar 16 16:31:48 crc kubenswrapper[4736]: I0316 16:31:48.288087 4736 generic.go:334] "Generic (PLEG): container finished" podID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerID="d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f" exitCode=0 Mar 16 16:31:48 crc kubenswrapper[4736]: I0316 16:31:48.288276 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerDied","Data":"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f"} Mar 16 16:31:49 crc kubenswrapper[4736]: I0316 16:31:49.299376 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerStarted","Data":"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393"} Mar 16 16:31:49 crc kubenswrapper[4736]: I0316 16:31:49.331376 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jkqzr" podStartSLOduration=3.921310046 podStartE2EDuration="7.331339727s" podCreationTimestamp="2026-03-16 16:31:42 +0000 UTC" firstStartedPulling="2026-03-16 
16:31:45.259593246 +0000 UTC m=+4706.986983523" lastFinishedPulling="2026-03-16 16:31:48.669622917 +0000 UTC m=+4710.397013204" observedRunningTime="2026-03-16 16:31:49.327156963 +0000 UTC m=+4711.054547250" watchObservedRunningTime="2026-03-16 16:31:49.331339727 +0000 UTC m=+4711.058730014" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.249060 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.253924 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.265454 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.364276 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.364325 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.368482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flw6r\" (UniqueName: \"kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.368670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.368691 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.471467 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.471733 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.471828 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flw6r\" (UniqueName: \"kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r\") pod \"redhat-marketplace-4gdwr\" (UID: 
\"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.472601 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.472719 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.500998 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flw6r\" (UniqueName: \"kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r\") pod \"redhat-marketplace-4gdwr\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:53 crc kubenswrapper[4736]: I0316 16:31:53.609556 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:31:54 crc kubenswrapper[4736]: I0316 16:31:54.417552 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jkqzr" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" probeResult="failure" output=< Mar 16 16:31:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:31:54 crc kubenswrapper[4736]: > Mar 16 16:31:56 crc kubenswrapper[4736]: I0316 16:31:55.382069 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:31:56 crc kubenswrapper[4736]: I0316 16:31:56.377668 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ee11689-20f5-463a-afe8-84179f46bf57" containerID="7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e" exitCode=0 Mar 16 16:31:56 crc kubenswrapper[4736]: I0316 16:31:56.377776 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerDied","Data":"7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e"} Mar 16 16:31:56 crc kubenswrapper[4736]: I0316 16:31:56.378326 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerStarted","Data":"151bb6ac0ba3b424a24449e021cf4f12b59715ae7605ea846afbd7d8a52c8fb2"} Mar 16 16:31:58 crc kubenswrapper[4736]: I0316 16:31:58.396189 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerStarted","Data":"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca"} Mar 16 16:31:59 crc kubenswrapper[4736]: I0316 16:31:59.404578 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ee11689-20f5-463a-afe8-84179f46bf57" containerID="1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca" exitCode=0 Mar 16 16:31:59 crc 
kubenswrapper[4736]: I0316 16:31:59.404629 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerDied","Data":"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca"} Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.284227 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561312-j9q5c"] Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.285835 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.299440 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561312-j9q5c"] Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.308322 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.308322 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.309038 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.423182 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerStarted","Data":"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e"} Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.442913 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4mt\" (UniqueName: \"kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt\") pod \"auto-csr-approver-29561312-j9q5c\" (UID: \"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70\") " pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.447771 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4gdwr" podStartSLOduration=3.958176893 podStartE2EDuration="7.447753658s" podCreationTimestamp="2026-03-16 16:31:53 +0000 UTC" firstStartedPulling="2026-03-16 16:31:56.386556194 +0000 UTC m=+4718.113946471" lastFinishedPulling="2026-03-16 16:31:59.876132949 +0000 UTC m=+4721.603523236" observedRunningTime="2026-03-16 16:32:00.438734693 +0000 UTC m=+4722.166124990" watchObservedRunningTime="2026-03-16 16:32:00.447753658 +0000 UTC m=+4722.175143945" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.544891 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4mt\" (UniqueName: \"kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt\") pod \"auto-csr-approver-29561312-j9q5c\" (UID: \"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70\") " pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.567621 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4mt\" (UniqueName: \"kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt\") pod \"auto-csr-approver-29561312-j9q5c\" (UID: \"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70\") " 
pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:00 crc kubenswrapper[4736]: I0316 16:32:00.608934 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:01 crc kubenswrapper[4736]: I0316 16:32:01.274130 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561312-j9q5c"] Mar 16 16:32:01 crc kubenswrapper[4736]: W0316 16:32:01.300274 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3b4f6d_ba34_470f_bb2f_0fc933d54d70.slice/crio-19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a WatchSource:0}: Error finding container 19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a: Status 404 returned error can't find the container with id 19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a Mar 16 16:32:01 crc kubenswrapper[4736]: I0316 16:32:01.433793 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" event={"ID":"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70","Type":"ContainerStarted","Data":"19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a"} Mar 16 16:32:03 crc kubenswrapper[4736]: I0316 16:32:03.611494 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:03 crc kubenswrapper[4736]: I0316 16:32:03.612119 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:04 crc kubenswrapper[4736]: I0316 16:32:04.413588 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jkqzr" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" probeResult="failure" output=< Mar 16 16:32:04 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:32:04 crc kubenswrapper[4736]: > Mar 16 16:32:04 crc kubenswrapper[4736]: I0316 16:32:04.458710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" event={"ID":"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70","Type":"ContainerStarted","Data":"a93ce01159af2488c2b9f0ee2c194dee6b94d92d73f69547983c88b774036fad"} Mar 16 16:32:04 crc kubenswrapper[4736]: I0316 16:32:04.477521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" podStartSLOduration=3.332096769 podStartE2EDuration="4.477494937s" podCreationTimestamp="2026-03-16 16:32:00 +0000 UTC" firstStartedPulling="2026-03-16 16:32:01.301942294 +0000 UTC m=+4723.029332581" lastFinishedPulling="2026-03-16 16:32:02.447340462 +0000 UTC m=+4724.174730749" observedRunningTime="2026-03-16 16:32:04.472741997 +0000 UTC m=+4726.200132314" watchObservedRunningTime="2026-03-16 16:32:04.477494937 +0000 UTC m=+4726.204885224" Mar 16 16:32:04 crc kubenswrapper[4736]: I0316 16:32:04.666807 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4gdwr" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" probeResult="failure" output=< Mar 16 16:32:04 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:32:04 crc kubenswrapper[4736]: > Mar 16 16:32:05 crc kubenswrapper[4736]: I0316 16:32:05.470040 4736 generic.go:334] 
"Generic (PLEG): container finished" podID="0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" containerID="a93ce01159af2488c2b9f0ee2c194dee6b94d92d73f69547983c88b774036fad" exitCode=0 Mar 16 16:32:05 crc kubenswrapper[4736]: I0316 16:32:05.470426 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" event={"ID":"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70","Type":"ContainerDied","Data":"a93ce01159af2488c2b9f0ee2c194dee6b94d92d73f69547983c88b774036fad"} Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.201401 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.302646 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4mt\" (UniqueName: \"kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt\") pod \"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70\" (UID: \"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70\") " Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.322432 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt" (OuterVolumeSpecName: "kube-api-access-lt4mt") pod "0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" (UID: "0b3b4f6d-ba34-470f-bb2f-0fc933d54d70"). InnerVolumeSpecName "kube-api-access-lt4mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.405081 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt4mt\" (UniqueName: \"kubernetes.io/projected/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70-kube-api-access-lt4mt\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.514097 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" event={"ID":"0b3b4f6d-ba34-470f-bb2f-0fc933d54d70","Type":"ContainerDied","Data":"19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a"} Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.514522 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561312-j9q5c" Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.515202 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19d51cafdaa86c24efde451d1cdef77a33ca0a059346e43a97e78c41895f534a" Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.674818 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561306-c75pn"] Mar 16 16:32:07 crc kubenswrapper[4736]: I0316 16:32:07.683401 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561306-c75pn"] Mar 16 16:32:09 crc kubenswrapper[4736]: I0316 16:32:09.003255 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0de3b91-e956-46ee-b977-c8417aa86d2c" path="/var/lib/kubelet/pods/d0de3b91-e956-46ee-b977-c8417aa86d2c/volumes" Mar 16 16:32:13 crc kubenswrapper[4736]: I0316 16:32:13.522508 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:32:13 crc kubenswrapper[4736]: I0316 16:32:13.571556 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:32:14 crc kubenswrapper[4736]: I0316 16:32:14.203640 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:32:14 crc kubenswrapper[4736]: I0316 16:32:14.588853 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jkqzr" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" containerID="cri-o://b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393" gracePeriod=2 Mar 16 16:32:14 crc kubenswrapper[4736]: I0316 16:32:14.652336 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4gdwr" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" probeResult="failure" output=< Mar 16 16:32:14 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:32:14 crc kubenswrapper[4736]: > Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.136529 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.256292 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpb44\" (UniqueName: \"kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44\") pod \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.256405 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities\") pod \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.256495 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content\") pod \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\" (UID: \"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e\") " Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.258150 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities" (OuterVolumeSpecName: "utilities") pod "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" (UID: "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.264553 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44" (OuterVolumeSpecName: "kube-api-access-mpb44") pod "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" (UID: "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e"). InnerVolumeSpecName "kube-api-access-mpb44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.329894 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" (UID: "3ffbe5cd-842d-4623-b8cf-45e0aee43d7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.358609 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.358639 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpb44\" (UniqueName: \"kubernetes.io/projected/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-kube-api-access-mpb44\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.358649 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.600756 4736 generic.go:334] "Generic (PLEG): container finished" podID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerID="b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393" exitCode=0 Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.600805 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerDied","Data":"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393"} Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.600845 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkqzr" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.600876 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkqzr" event={"ID":"3ffbe5cd-842d-4623-b8cf-45e0aee43d7e","Type":"ContainerDied","Data":"87b93cb6da27f5f133f3ca30beb1491a056c856d7454094f9834db6b86ac812e"} Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.602007 4736 scope.go:117] "RemoveContainer" containerID="b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.652736 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.654657 4736 scope.go:117] "RemoveContainer" containerID="d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.660796 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jkqzr"] Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.691613 4736 scope.go:117] "RemoveContainer" containerID="4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.747064 4736 scope.go:117] "RemoveContainer" containerID="b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393" Mar 16 16:32:15 crc kubenswrapper[4736]: E0316 16:32:15.752542 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393\": container with ID starting with b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393 not found: ID does not exist" containerID="b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.752622 
4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393"} err="failed to get container status \"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393\": rpc error: code = NotFound desc = could not find container \"b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393\": container with ID starting with b77870ab28f9adef29d2dbb1fc25496913579619ac828bcbfc6978dafc124393 not found: ID does not exist" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.752653 4736 scope.go:117] "RemoveContainer" containerID="d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f" Mar 16 16:32:15 crc kubenswrapper[4736]: E0316 16:32:15.753095 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f\": container with ID starting with d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f not found: ID does not exist" containerID="d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.753135 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f"} err="failed to get container status \"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f\": rpc error: code = NotFound desc = could not find container \"d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f\": container with ID starting with d83bba6d503768261af5ffbf68b3f5ccf4cf4278cda0df2e6ff543950b98456f not found: ID does not exist" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.753150 4736 scope.go:117] "RemoveContainer" containerID="4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2" Mar 16 16:32:15 crc kubenswrapper[4736]: E0316 16:32:15.753437 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2\": container with ID starting with 4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2 not found: ID does not exist" containerID="4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2" Mar 16 16:32:15 crc kubenswrapper[4736]: I0316 16:32:15.753459 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2"} err="failed to get container status \"4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2\": rpc error: code = NotFound desc = could not find container \"4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2\": container with ID starting with 4d7ee6e8b11698306901a8146cc631f78fdc28f5a0a31604feb5c5cb3b77d5f2 not found: ID does not exist" Mar 16 16:32:18 crc kubenswrapper[4736]: I0316 16:32:18.638362 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" path="/var/lib/kubelet/pods/3ffbe5cd-842d-4623-b8cf-45e0aee43d7e/volumes" Mar 16 16:32:23 crc kubenswrapper[4736]: I0316 16:32:23.694802 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:23 crc kubenswrapper[4736]: I0316 16:32:23.759399 4736 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:24 crc kubenswrapper[4736]: I0316 16:32:24.471142 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:32:25 crc kubenswrapper[4736]: I0316 16:32:25.692561 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4gdwr" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" containerID="cri-o://7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e" gracePeriod=2 Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.280145 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.417676 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities\") pod \"1ee11689-20f5-463a-afe8-84179f46bf57\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.417751 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flw6r\" (UniqueName: \"kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r\") pod \"1ee11689-20f5-463a-afe8-84179f46bf57\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.417928 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content\") pod \"1ee11689-20f5-463a-afe8-84179f46bf57\" (UID: \"1ee11689-20f5-463a-afe8-84179f46bf57\") " Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.418746 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities" (OuterVolumeSpecName: "utilities") pod "1ee11689-20f5-463a-afe8-84179f46bf57" (UID: "1ee11689-20f5-463a-afe8-84179f46bf57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.428999 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r" (OuterVolumeSpecName: "kube-api-access-flw6r") pod "1ee11689-20f5-463a-afe8-84179f46bf57" (UID: "1ee11689-20f5-463a-afe8-84179f46bf57"). InnerVolumeSpecName "kube-api-access-flw6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.437219 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ee11689-20f5-463a-afe8-84179f46bf57" (UID: "1ee11689-20f5-463a-afe8-84179f46bf57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.520561 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.520595 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flw6r\" (UniqueName: \"kubernetes.io/projected/1ee11689-20f5-463a-afe8-84179f46bf57-kube-api-access-flw6r\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.520606 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee11689-20f5-463a-afe8-84179f46bf57-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.701849 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ee11689-20f5-463a-afe8-84179f46bf57" containerID="7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e" exitCode=0 Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.701903 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerDied","Data":"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e"} Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.701933 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4gdwr" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.701953 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4gdwr" event={"ID":"1ee11689-20f5-463a-afe8-84179f46bf57","Type":"ContainerDied","Data":"151bb6ac0ba3b424a24449e021cf4f12b59715ae7605ea846afbd7d8a52c8fb2"} Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.701976 4736 scope.go:117] "RemoveContainer" containerID="7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.753033 4736 scope.go:117] "RemoveContainer" containerID="1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.758029 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.782894 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4gdwr"] Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.787669 4736 scope.go:117] "RemoveContainer" containerID="7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.825317 4736 scope.go:117] "RemoveContainer" containerID="7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e" Mar 16 16:32:26 crc kubenswrapper[4736]: E0316 16:32:26.825673 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e\": container with ID starting with 7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e not found: ID does not exist" containerID="7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.825708 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e"} err="failed to get container status \"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e\": rpc error: code = NotFound desc = could not find container \"7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e\": container with ID starting with 7a22a69f7cb36edb72b12403acb157c13e1adff73a05ddc3afb5db7697930e5e not found: ID does not exist" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.825734 4736 scope.go:117] "RemoveContainer" containerID="1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca" Mar 16 16:32:26 crc kubenswrapper[4736]: E0316 16:32:26.825964 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca\": container with ID starting with 1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca not found: ID does not exist" containerID="1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.825990 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca"} err="failed to get container status \"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca\": rpc error: code = NotFound desc = could not find container \"1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca\": container with ID starting with 1e5b7ac234383d62bf6fd55d8bc6a4ae0c9653159c1604f9675b9b6a890a99ca not found: ID does not exist" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.826004 4736 scope.go:117] "RemoveContainer" containerID="7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e" Mar 16 16:32:26 crc kubenswrapper[4736]: E0316 16:32:26.826236 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e\": container with ID starting with 7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e not found: ID does not exist" containerID="7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.826268 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e"} err="failed to get container status \"7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e\": rpc error: code = NotFound desc = could not find container \"7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e\": container with ID starting with 7707d834fdcb59cd0fb54303745ec96d7b1a86851d395f9ce2eb36dc7823dd4e not found: ID does not exist" Mar 16 16:32:26 crc kubenswrapper[4736]: I0316 16:32:26.993221 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" path="/var/lib/kubelet/pods/1ee11689-20f5-463a-afe8-84179f46bf57/volumes" Mar 16 16:32:56 crc kubenswrapper[4736]: I0316 16:32:56.929873 4736 scope.go:117] "RemoveContainer" containerID="1b85a3062ebabaa1328dfa0288420a69f610be04180d4f43323868e6723efc9f" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.240649 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.241939 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.241971 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242009 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="extract-utilities" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242018 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="extract-utilities" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242033 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="extract-content" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242041 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="extract-content" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242065 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="extract-utilities" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242074 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="extract-utilities" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242087 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242096 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242183 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" containerName="oc" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242192 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" containerName="oc" Mar 16 16:32:58 crc kubenswrapper[4736]: E0316 16:32:58.242212 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="extract-content" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242220 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="extract-content" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242478 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" containerName="oc" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242513 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee11689-20f5-463a-afe8-84179f46bf57" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.242539 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffbe5cd-842d-4623-b8cf-45e0aee43d7e" containerName="registry-server" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.244783 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.254300 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.375683 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.375846 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.376413 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl55v\" (UniqueName: \"kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.478151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.478527 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.478677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl55v\" (UniqueName: \"kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.479386 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.479812 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.504759 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kl55v\" (UniqueName: \"kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v\") pod \"community-operators-b75zh\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:58 crc kubenswrapper[4736]: I0316 16:32:58.609936 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:32:59 crc kubenswrapper[4736]: I0316 16:32:59.225514 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:33:00 crc kubenswrapper[4736]: I0316 16:33:00.068616 4736 generic.go:334] "Generic (PLEG): container finished" podID="b9c1b703-4677-4991-b90b-37b8168940c5" containerID="43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf" exitCode=0 Mar 16 16:33:00 crc kubenswrapper[4736]: I0316 16:33:00.068710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerDied","Data":"43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf"} Mar 16 16:33:00 crc kubenswrapper[4736]: I0316 16:33:00.068974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerStarted","Data":"8c2b5b3f8a66cee86eb94e9d862c98515b954fc7682d40a344bc06305532af00"} Mar 16 16:33:00 crc kubenswrapper[4736]: I0316 16:33:00.070924 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:33:01 crc kubenswrapper[4736]: I0316 16:33:01.079696 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerStarted","Data":"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920"} Mar 16 16:33:03 crc kubenswrapper[4736]: I0316 16:33:03.100188 4736 generic.go:334] "Generic (PLEG): container finished" podID="b9c1b703-4677-4991-b90b-37b8168940c5" containerID="088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920" exitCode=0 Mar 16 16:33:03 crc kubenswrapper[4736]: I0316 16:33:03.100255 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerDied","Data":"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920"} Mar 16 16:33:04 crc kubenswrapper[4736]: I0316 16:33:04.114064 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerStarted","Data":"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9"} Mar 16 16:33:04 crc kubenswrapper[4736]: I0316 16:33:04.137719 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b75zh" podStartSLOduration=2.647850338 podStartE2EDuration="6.136752664s" podCreationTimestamp="2026-03-16 16:32:58 +0000 UTC" firstStartedPulling="2026-03-16 16:33:00.070657287 +0000 UTC m=+4781.798047584" lastFinishedPulling="2026-03-16 16:33:03.559559623 +0000 UTC m=+4785.286949910" observedRunningTime="2026-03-16 16:33:04.129178758 +0000 UTC m=+4785.856569065" watchObservedRunningTime="2026-03-16 
16:33:04.136752664 +0000 UTC m=+4785.864142951" Mar 16 16:33:08 crc kubenswrapper[4736]: I0316 16:33:08.610333 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:08 crc kubenswrapper[4736]: I0316 16:33:08.610949 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:09 crc kubenswrapper[4736]: I0316 16:33:09.655191 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b75zh" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="registry-server" probeResult="failure" output=< Mar 16 16:33:09 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:33:09 crc kubenswrapper[4736]: > Mar 16 16:33:18 crc kubenswrapper[4736]: I0316 16:33:18.654116 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:18 crc kubenswrapper[4736]: I0316 16:33:18.717446 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:18 crc kubenswrapper[4736]: I0316 16:33:18.903207 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:33:20 crc kubenswrapper[4736]: I0316 16:33:20.265871 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b75zh" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="registry-server" containerID="cri-o://283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9" gracePeriod=2 Mar 16 16:33:20 crc kubenswrapper[4736]: E0316 16:33:20.533768 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9c1b703_4677_4991_b90b_37b8168940c5.slice/crio-283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9c1b703_4677_4991_b90b_37b8168940c5.slice/crio-conmon-283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9.scope\": RecentStats: unable to find data in memory cache]" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.146237 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.275306 4736 generic.go:334] "Generic (PLEG): container finished" podID="b9c1b703-4677-4991-b90b-37b8168940c5" containerID="283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9" exitCode=0 Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.275353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerDied","Data":"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9"} Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.275379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b75zh" event={"ID":"b9c1b703-4677-4991-b90b-37b8168940c5","Type":"ContainerDied","Data":"8c2b5b3f8a66cee86eb94e9d862c98515b954fc7682d40a344bc06305532af00"} Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.275396 4736 scope.go:117] "RemoveContainer" containerID="283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.275418 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b75zh" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.301980 4736 scope.go:117] "RemoveContainer" containerID="088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.324677 4736 scope.go:117] "RemoveContainer" containerID="43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.348036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities\") pod \"b9c1b703-4677-4991-b90b-37b8168940c5\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.348289 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content\") pod \"b9c1b703-4677-4991-b90b-37b8168940c5\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.348372 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl55v\" (UniqueName: \"kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v\") pod \"b9c1b703-4677-4991-b90b-37b8168940c5\" (UID: \"b9c1b703-4677-4991-b90b-37b8168940c5\") " Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.348581 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities" (OuterVolumeSpecName: "utilities") pod "b9c1b703-4677-4991-b90b-37b8168940c5" (UID: "b9c1b703-4677-4991-b90b-37b8168940c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.349782 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.366241 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v" (OuterVolumeSpecName: "kube-api-access-kl55v") pod "b9c1b703-4677-4991-b90b-37b8168940c5" (UID: "b9c1b703-4677-4991-b90b-37b8168940c5"). InnerVolumeSpecName "kube-api-access-kl55v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.370276 4736 scope.go:117] "RemoveContainer" containerID="283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9" Mar 16 16:33:21 crc kubenswrapper[4736]: E0316 16:33:21.371301 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9\": container with ID starting with 283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9 not found: ID does not exist" containerID="283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.371352 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9"} err="failed to get container status \"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9\": rpc error: code = NotFound desc = could not find container \"283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9\": container with ID starting with 283545f1917cfc5ef563555b2dbcd198b17cf3622bcebe6fc51737c3d22eb4d9 not found: ID does not exist" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.371397 4736 scope.go:117] "RemoveContainer" containerID="088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920" Mar 16 16:33:21 crc kubenswrapper[4736]: E0316 16:33:21.372001 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920\": container with ID starting with 088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920 not found: ID does not exist" containerID="088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.372030 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920"} err="failed to get container status \"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920\": rpc error: code = NotFound desc = could not find container \"088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920\": container with ID starting with 088664df654f1d39aee663b5f10c63dece46b35c9bde6f0e464d651e536e0920 not found: ID does not exist" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.372049 4736 scope.go:117] "RemoveContainer" containerID="43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf" Mar 16 16:33:21 crc kubenswrapper[4736]: E0316 16:33:21.372389 4736 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf\": container with ID starting with 43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf not found: ID does not exist" containerID="43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.372425 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf"} err="failed to get container status \"43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf\": rpc error: code = NotFound desc = could not find container \"43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf\": container with ID starting with 43cd522d39803e36d92f3bd08fb8dc6eeaf327e07c769acbd183021439d9c0cf not found: ID does not exist" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.423939 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9c1b703-4677-4991-b90b-37b8168940c5" (UID: "b9c1b703-4677-4991-b90b-37b8168940c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.451513 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9c1b703-4677-4991-b90b-37b8168940c5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.451544 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl55v\" (UniqueName: \"kubernetes.io/projected/b9c1b703-4677-4991-b90b-37b8168940c5-kube-api-access-kl55v\") on node \"crc\" DevicePath \"\"" Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.626552 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:33:21 crc kubenswrapper[4736]: I0316 16:33:21.635195 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b75zh"] Mar 16 16:33:22 crc kubenswrapper[4736]: I0316 16:33:22.989409 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" path="/var/lib/kubelet/pods/b9c1b703-4677-4991-b90b-37b8168940c5/volumes" Mar 16 16:33:38 crc kubenswrapper[4736]: I0316 16:33:38.507904 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:33:38 crc kubenswrapper[4736]: I0316 16:33:38.508559 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.164440 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561314-8n726"] Mar 16 16:34:00 crc kubenswrapper[4736]: E0316 16:34:00.167453 4736 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="extract-utilities" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.167474 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="extract-utilities" Mar 16 16:34:00 crc kubenswrapper[4736]: E0316 16:34:00.167497 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="registry-server" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.167504 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="registry-server" Mar 16 16:34:00 crc kubenswrapper[4736]: E0316 16:34:00.167525 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="extract-content" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.167531 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="extract-content" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.168843 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c1b703-4677-4991-b90b-37b8168940c5" containerName="registry-server" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.173663 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.181057 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561314-8n726"] Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.189767 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.189782 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.189782 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.233831 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbwt\" (UniqueName: \"kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt\") pod \"auto-csr-approver-29561314-8n726\" (UID: \"245a7b65-d83a-4157-9ed1-d68cb1c164d3\") " pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.335633 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbwt\" (UniqueName: \"kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt\") pod \"auto-csr-approver-29561314-8n726\" (UID: \"245a7b65-d83a-4157-9ed1-d68cb1c164d3\") " pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.362423 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbwt\" (UniqueName: \"kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt\") pod \"auto-csr-approver-29561314-8n726\" (UID: \"245a7b65-d83a-4157-9ed1-d68cb1c164d3\") " pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:00 crc kubenswrapper[4736]: I0316 16:34:00.500184 4736 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:01 crc kubenswrapper[4736]: I0316 16:34:01.128443 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561314-8n726"] Mar 16 16:34:01 crc kubenswrapper[4736]: I0316 16:34:01.668190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561314-8n726" event={"ID":"245a7b65-d83a-4157-9ed1-d68cb1c164d3","Type":"ContainerStarted","Data":"86b3027ebd2a93bd7b6e9927a2b64a1b7381b211b6f92ce2b463c3ad24b2a0d2"} Mar 16 16:34:03 crc kubenswrapper[4736]: I0316 16:34:03.690296 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561314-8n726" event={"ID":"245a7b65-d83a-4157-9ed1-d68cb1c164d3","Type":"ContainerStarted","Data":"1e223e4ee806e29648788039e4a1cc15fa7d40a6cac4f06a95b315762a868164"} Mar 16 16:34:03 crc kubenswrapper[4736]: I0316 16:34:03.739029 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561314-8n726" podStartSLOduration=2.680817195 podStartE2EDuration="3.739004061s" podCreationTimestamp="2026-03-16 16:34:00 +0000 UTC" firstStartedPulling="2026-03-16 16:34:01.143001093 +0000 UTC m=+4842.870391400" lastFinishedPulling="2026-03-16 16:34:02.201187939 +0000 UTC m=+4843.928578266" observedRunningTime="2026-03-16 16:34:03.724457115 +0000 UTC m=+4845.451847412" watchObservedRunningTime="2026-03-16 16:34:03.739004061 +0000 UTC m=+4845.466394358" Mar 16 16:34:04 crc kubenswrapper[4736]: I0316 16:34:04.704019 4736 generic.go:334] "Generic (PLEG): container finished" podID="245a7b65-d83a-4157-9ed1-d68cb1c164d3" containerID="1e223e4ee806e29648788039e4a1cc15fa7d40a6cac4f06a95b315762a868164" exitCode=0 Mar 16 16:34:04 crc kubenswrapper[4736]: I0316 16:34:04.704171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561314-8n726" event={"ID":"245a7b65-d83a-4157-9ed1-d68cb1c164d3","Type":"ContainerDied","Data":"1e223e4ee806e29648788039e4a1cc15fa7d40a6cac4f06a95b315762a868164"} Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.188311 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.264578 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbwt\" (UniqueName: \"kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt\") pod \"245a7b65-d83a-4157-9ed1-d68cb1c164d3\" (UID: \"245a7b65-d83a-4157-9ed1-d68cb1c164d3\") " Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.271101 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt" (OuterVolumeSpecName: "kube-api-access-dkbwt") pod "245a7b65-d83a-4157-9ed1-d68cb1c164d3" (UID: "245a7b65-d83a-4157-9ed1-d68cb1c164d3"). InnerVolumeSpecName "kube-api-access-dkbwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.366581 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbwt\" (UniqueName: \"kubernetes.io/projected/245a7b65-d83a-4157-9ed1-d68cb1c164d3-kube-api-access-dkbwt\") on node \"crc\" DevicePath \"\"" Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.738417 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561314-8n726" event={"ID":"245a7b65-d83a-4157-9ed1-d68cb1c164d3","Type":"ContainerDied","Data":"86b3027ebd2a93bd7b6e9927a2b64a1b7381b211b6f92ce2b463c3ad24b2a0d2"} Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.738489 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561314-8n726" Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.738454 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b3027ebd2a93bd7b6e9927a2b64a1b7381b211b6f92ce2b463c3ad24b2a0d2" Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.814608 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561308-bw4k6"] Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.823521 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561308-bw4k6"] Mar 16 16:34:06 crc kubenswrapper[4736]: I0316 16:34:06.994266 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd89b27a-c162-465d-b875-c1e672b27a67" path="/var/lib/kubelet/pods/dd89b27a-c162-465d-b875-c1e672b27a67/volumes" Mar 16 16:34:08 crc kubenswrapper[4736]: I0316 16:34:08.507965 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:34:08 crc kubenswrapper[4736]: I0316 16:34:08.508999 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:34:38 crc kubenswrapper[4736]: I0316 16:34:38.507996 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:34:38 crc kubenswrapper[4736]: I0316 16:34:38.508624 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:34:38 crc kubenswrapper[4736]: I0316 16:34:38.508662 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:34:38 crc kubenswrapper[4736]: I0316 16:34:38.509281 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:34:38 crc kubenswrapper[4736]: I0316 16:34:38.509367 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350" gracePeriod=600 Mar 16 16:34:39 crc kubenswrapper[4736]: I0316 16:34:39.347554 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350" exitCode=0 Mar 16 16:34:39 crc kubenswrapper[4736]: I0316 16:34:39.347695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350"} Mar 16 16:34:39 crc kubenswrapper[4736]: I0316 16:34:39.347974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a"} Mar 16 16:34:39 crc kubenswrapper[4736]: I0316 16:34:39.348005 4736 scope.go:117] "RemoveContainer" containerID="be7a491f611dffaeff75f9c24c37ec0dbbc571d2c3cb39214e201272cc2a8619" Mar 16 16:34:57 crc kubenswrapper[4736]: I0316 16:34:57.193201 4736 scope.go:117] "RemoveContainer" containerID="b34f09fcd38e6f55fe8b42819aa3b082e66c5fd3c860971b28d514e3a71314b5" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.156326 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561316-fqqhg"] Mar 16 16:36:00 crc kubenswrapper[4736]: E0316 16:36:00.157537 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245a7b65-d83a-4157-9ed1-d68cb1c164d3" containerName="oc" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.157563 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="245a7b65-d83a-4157-9ed1-d68cb1c164d3" containerName="oc" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.157857 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="245a7b65-d83a-4157-9ed1-d68cb1c164d3" containerName="oc" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.158884 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.163704 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.164891 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.166038 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.170546 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561316-fqqhg"] Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.326452 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q4kl\" (UniqueName: \"kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl\") pod \"auto-csr-approver-29561316-fqqhg\" (UID: \"3b430cbb-acce-4f2a-8eed-2af3cb630bd5\") " pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.428561 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q4kl\" (UniqueName: \"kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl\") pod \"auto-csr-approver-29561316-fqqhg\" (UID: \"3b430cbb-acce-4f2a-8eed-2af3cb630bd5\") " pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.449281 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q4kl\" (UniqueName: \"kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl\") pod \"auto-csr-approver-29561316-fqqhg\" (UID: \"3b430cbb-acce-4f2a-8eed-2af3cb630bd5\") " pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:00 crc kubenswrapper[4736]: I0316 16:36:00.480302 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:01 crc kubenswrapper[4736]: I0316 16:36:01.056532 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561316-fqqhg"] Mar 16 16:36:01 crc kubenswrapper[4736]: I0316 16:36:01.131339 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" event={"ID":"3b430cbb-acce-4f2a-8eed-2af3cb630bd5","Type":"ContainerStarted","Data":"447d005dfbd1411ead3746490238da0026f880bd4aeedfbf8a80a2efcc448eaf"} Mar 16 16:36:03 crc kubenswrapper[4736]: I0316 16:36:03.159978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" event={"ID":"3b430cbb-acce-4f2a-8eed-2af3cb630bd5","Type":"ContainerStarted","Data":"b70f96db160ab6c2c20124a91eaed7f9224648013eaf2622534a870961572d80"} Mar 16 16:36:03 crc kubenswrapper[4736]: I0316 16:36:03.179159 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" podStartSLOduration=2.161248884 podStartE2EDuration="3.179133862s" podCreationTimestamp="2026-03-16 16:36:00 +0000 UTC" firstStartedPulling="2026-03-16 16:36:01.065444576 +0000 UTC m=+4962.792834863" lastFinishedPulling="2026-03-16 16:36:02.083329544 +0000 UTC m=+4963.810719841" observedRunningTime="2026-03-16 16:36:03.172646016 +0000 UTC m=+4964.900036303" watchObservedRunningTime="2026-03-16 16:36:03.179133862 +0000 UTC m=+4964.906524149" Mar 16 16:36:04 crc kubenswrapper[4736]: I0316 16:36:04.172321 4736 generic.go:334] "Generic (PLEG): container finished" podID="3b430cbb-acce-4f2a-8eed-2af3cb630bd5" containerID="b70f96db160ab6c2c20124a91eaed7f9224648013eaf2622534a870961572d80" exitCode=0 Mar 16 16:36:04 crc kubenswrapper[4736]: I0316 16:36:04.172570 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" event={"ID":"3b430cbb-acce-4f2a-8eed-2af3cb630bd5","Type":"ContainerDied","Data":"b70f96db160ab6c2c20124a91eaed7f9224648013eaf2622534a870961572d80"} Mar 16 16:36:05 crc kubenswrapper[4736]: I0316 16:36:05.591465 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:05 crc kubenswrapper[4736]: I0316 16:36:05.731249 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q4kl\" (UniqueName: \"kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl\") pod \"3b430cbb-acce-4f2a-8eed-2af3cb630bd5\" (UID: \"3b430cbb-acce-4f2a-8eed-2af3cb630bd5\") " Mar 16 16:36:05 crc kubenswrapper[4736]: I0316 16:36:05.739085 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl" (OuterVolumeSpecName: "kube-api-access-4q4kl") pod "3b430cbb-acce-4f2a-8eed-2af3cb630bd5" (UID: "3b430cbb-acce-4f2a-8eed-2af3cb630bd5"). InnerVolumeSpecName "kube-api-access-4q4kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:36:05 crc kubenswrapper[4736]: I0316 16:36:05.834009 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q4kl\" (UniqueName: \"kubernetes.io/projected/3b430cbb-acce-4f2a-8eed-2af3cb630bd5-kube-api-access-4q4kl\") on node \"crc\" DevicePath \"\"" Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.215204 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" event={"ID":"3b430cbb-acce-4f2a-8eed-2af3cb630bd5","Type":"ContainerDied","Data":"447d005dfbd1411ead3746490238da0026f880bd4aeedfbf8a80a2efcc448eaf"} Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.215245 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="447d005dfbd1411ead3746490238da0026f880bd4aeedfbf8a80a2efcc448eaf" Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.215299 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561316-fqqhg" Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.257623 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561310-8jbq2"] Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.266070 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561310-8jbq2"] Mar 16 16:36:06 crc kubenswrapper[4736]: I0316 16:36:06.991475 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa1b59c-f1ce-4bad-ad42-d1c383855885" path="/var/lib/kubelet/pods/1aa1b59c-f1ce-4bad-ad42-d1c383855885/volumes" Mar 16 16:36:38 crc kubenswrapper[4736]: I0316 16:36:38.507863 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:36:38 crc kubenswrapper[4736]: I0316 16:36:38.508347 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:36:57 crc kubenswrapper[4736]: I0316 16:36:57.381443 4736 scope.go:117] "RemoveContainer" containerID="abbb4a051e1d8c6b638ef802f7af399490a05dcf11a96b271ce274b11aa7eb4e" Mar 16 16:37:08 crc kubenswrapper[4736]: I0316 16:37:08.508224 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:37:08 crc kubenswrapper[4736]: I0316 16:37:08.509000 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:37:38 crc kubenswrapper[4736]: I0316 16:37:38.508542 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:37:38 crc kubenswrapper[4736]: I0316 16:37:38.509062 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:37:38 crc kubenswrapper[4736]: I0316 16:37:38.509152 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:37:38 crc kubenswrapper[4736]: I0316 16:37:38.509641 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:37:38 crc kubenswrapper[4736]: I0316 16:37:38.509689 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" gracePeriod=600 Mar 16 16:37:38 crc kubenswrapper[4736]: E0316 16:37:38.633486 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:37:39 crc kubenswrapper[4736]: I0316 16:37:39.115245 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" exitCode=0 Mar 16 16:37:39 crc kubenswrapper[4736]: I0316 16:37:39.115309 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a"} Mar 16 16:37:39 crc kubenswrapper[4736]: I0316 16:37:39.115342 4736 scope.go:117] "RemoveContainer" containerID="1e237bbdcaf8b89ad27d65e5e28ca7e3b5e4e81946b16727b0c800dc524e8350" Mar 16 16:37:39 crc kubenswrapper[4736]: I0316 16:37:39.115779 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:37:39 crc kubenswrapper[4736]: E0316 16:37:39.116075 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" 
Mar 16 16:37:52 crc kubenswrapper[4736]: I0316 16:37:52.978994 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:37:52 crc kubenswrapper[4736]: E0316 16:37:52.980450 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.150736 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561318-g4xt4"] Mar 16 16:38:00 crc kubenswrapper[4736]: E0316 16:38:00.151869 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b430cbb-acce-4f2a-8eed-2af3cb630bd5" containerName="oc" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.151884 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b430cbb-acce-4f2a-8eed-2af3cb630bd5" containerName="oc" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.152164 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b430cbb-acce-4f2a-8eed-2af3cb630bd5" containerName="oc" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.152978 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.155063 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.155198 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.156468 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.163509 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561318-g4xt4"] Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.291781 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbxk\" (UniqueName: \"kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk\") pod \"auto-csr-approver-29561318-g4xt4\" (UID: \"7599c5c0-edfb-4e99-99c9-af5ee2f068e3\") " pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.394198 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbxk\" (UniqueName: \"kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk\") pod \"auto-csr-approver-29561318-g4xt4\" (UID: \"7599c5c0-edfb-4e99-99c9-af5ee2f068e3\") " pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.415897 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbxk\" (UniqueName: \"kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk\") pod \"auto-csr-approver-29561318-g4xt4\" (UID: \"7599c5c0-edfb-4e99-99c9-af5ee2f068e3\") " 
pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.477522 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:00 crc kubenswrapper[4736]: W0316 16:38:00.987359 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7599c5c0_edfb_4e99_99c9_af5ee2f068e3.slice/crio-c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da WatchSource:0}: Error finding container c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da: Status 404 returned error can't find the container with id c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.989430 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561318-g4xt4"] Mar 16 16:38:00 crc kubenswrapper[4736]: I0316 16:38:00.991514 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:38:01 crc kubenswrapper[4736]: I0316 16:38:01.314978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" event={"ID":"7599c5c0-edfb-4e99-99c9-af5ee2f068e3","Type":"ContainerStarted","Data":"c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da"} Mar 16 16:38:02 crc kubenswrapper[4736]: I0316 16:38:02.325477 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" event={"ID":"7599c5c0-edfb-4e99-99c9-af5ee2f068e3","Type":"ContainerStarted","Data":"b7973b1bfd8546534ec49eedfa3dbf504406c8ffbe30e26a49d98b5e340dcb3a"} Mar 16 16:38:02 crc kubenswrapper[4736]: I0316 16:38:02.352452 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" podStartSLOduration=1.318138432 podStartE2EDuration="2.352427797s" podCreationTimestamp="2026-03-16 16:38:00 +0000 UTC" firstStartedPulling="2026-03-16 16:38:00.990052468 +0000 UTC m=+5082.717442755" lastFinishedPulling="2026-03-16 16:38:02.024341833 +0000 UTC m=+5083.751732120" observedRunningTime="2026-03-16 16:38:02.345613241 +0000 UTC m=+5084.073003538" watchObservedRunningTime="2026-03-16 16:38:02.352427797 +0000 UTC m=+5084.079818084" Mar 16 16:38:03 crc kubenswrapper[4736]: I0316 16:38:03.345502 4736 generic.go:334] "Generic (PLEG): container finished" podID="7599c5c0-edfb-4e99-99c9-af5ee2f068e3" containerID="b7973b1bfd8546534ec49eedfa3dbf504406c8ffbe30e26a49d98b5e340dcb3a" exitCode=0 Mar 16 16:38:03 crc kubenswrapper[4736]: I0316 16:38:03.345547 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" event={"ID":"7599c5c0-edfb-4e99-99c9-af5ee2f068e3","Type":"ContainerDied","Data":"b7973b1bfd8546534ec49eedfa3dbf504406c8ffbe30e26a49d98b5e340dcb3a"} Mar 16 16:38:04 crc kubenswrapper[4736]: I0316 16:38:04.715413 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:04 crc kubenswrapper[4736]: I0316 16:38:04.772569 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcbxk\" (UniqueName: \"kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk\") pod \"7599c5c0-edfb-4e99-99c9-af5ee2f068e3\" (UID: \"7599c5c0-edfb-4e99-99c9-af5ee2f068e3\") " Mar 16 16:38:04 crc kubenswrapper[4736]: I0316 16:38:04.784380 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk" (OuterVolumeSpecName: "kube-api-access-zcbxk") pod "7599c5c0-edfb-4e99-99c9-af5ee2f068e3" (UID: "7599c5c0-edfb-4e99-99c9-af5ee2f068e3"). InnerVolumeSpecName "kube-api-access-zcbxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:38:04 crc kubenswrapper[4736]: I0316 16:38:04.874794 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcbxk\" (UniqueName: \"kubernetes.io/projected/7599c5c0-edfb-4e99-99c9-af5ee2f068e3-kube-api-access-zcbxk\") on node \"crc\" DevicePath \"\"" Mar 16 16:38:05 crc kubenswrapper[4736]: I0316 16:38:05.367041 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" event={"ID":"7599c5c0-edfb-4e99-99c9-af5ee2f068e3","Type":"ContainerDied","Data":"c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da"} Mar 16 16:38:05 crc kubenswrapper[4736]: I0316 16:38:05.367127 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561318-g4xt4" Mar 16 16:38:05 crc kubenswrapper[4736]: I0316 16:38:05.367098 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d41d9344351f63c5742de6e90046d31f5dc9246afa1ef40e20872474b3c4da" Mar 16 16:38:05 crc kubenswrapper[4736]: I0316 16:38:05.427338 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561312-j9q5c"] Mar 16 16:38:05 crc kubenswrapper[4736]: I0316 16:38:05.438058 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561312-j9q5c"] Mar 16 16:38:06 crc kubenswrapper[4736]: I0316 16:38:06.980442 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:38:06 crc kubenswrapper[4736]: E0316 16:38:06.981015 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:38:06 crc kubenswrapper[4736]: I0316 16:38:06.993671 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3b4f6d-ba34-470f-bb2f-0fc933d54d70" path="/var/lib/kubelet/pods/0b3b4f6d-ba34-470f-bb2f-0fc933d54d70/volumes" Mar 16 16:38:20 crc kubenswrapper[4736]: I0316 16:38:20.978391 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:38:20 crc kubenswrapper[4736]: E0316 16:38:20.979190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.896887 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:38:33 crc kubenswrapper[4736]: E0316 16:38:33.897918 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7599c5c0-edfb-4e99-99c9-af5ee2f068e3" containerName="oc" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.897936 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7599c5c0-edfb-4e99-99c9-af5ee2f068e3" containerName="oc" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.898184 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7599c5c0-edfb-4e99-99c9-af5ee2f068e3" containerName="oc" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.900722 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.920881 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.979461 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.980223 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vvx\" (UniqueName: \"kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:33 crc kubenswrapper[4736]: I0316 16:38:33.981237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.083357 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.083587 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vvx\" (UniqueName: \"kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.083611 4736 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.083935 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.085460 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.108085 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vvx\" (UniqueName: \"kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx\") pod \"redhat-operators-56m7r\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.229986 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.706478 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:38:34 crc kubenswrapper[4736]: I0316 16:38:34.978924 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:38:34 crc kubenswrapper[4736]: E0316 16:38:34.979566 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:38:35 crc kubenswrapper[4736]: I0316 16:38:35.703838 4736 generic.go:334] "Generic (PLEG): container finished" podID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerID="32eb318e72c8153c32c3512a24f6a20af2c7fac40122c31eba3aad27e71b11f4" exitCode=0 Mar 16 16:38:35 crc kubenswrapper[4736]: I0316 16:38:35.703895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerDied","Data":"32eb318e72c8153c32c3512a24f6a20af2c7fac40122c31eba3aad27e71b11f4"} Mar 16 16:38:35 crc kubenswrapper[4736]: I0316 16:38:35.703946 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerStarted","Data":"3d4854647f31c8c6cf8727768007da25b24c08af96ab1020dd044a669ed414f2"} Mar 16 16:38:36 crc kubenswrapper[4736]: I0316 16:38:36.715028 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" 
event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerStarted","Data":"a5ca5007705aa3b4cf08cee007d2c15add54911d8aea182106eca5884f570f71"} Mar 16 16:38:41 crc kubenswrapper[4736]: I0316 16:38:41.779159 4736 generic.go:334] "Generic (PLEG): container finished" podID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerID="a5ca5007705aa3b4cf08cee007d2c15add54911d8aea182106eca5884f570f71" exitCode=0 Mar 16 16:38:41 crc kubenswrapper[4736]: I0316 16:38:41.779227 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerDied","Data":"a5ca5007705aa3b4cf08cee007d2c15add54911d8aea182106eca5884f570f71"} Mar 16 16:38:42 crc kubenswrapper[4736]: I0316 16:38:42.798590 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerStarted","Data":"0169d43387c550187fab2efb2b45d022efb5715dc5fba091a8d066af81045295"} Mar 16 16:38:42 crc kubenswrapper[4736]: I0316 16:38:42.827342 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56m7r" podStartSLOduration=3.115425019 podStartE2EDuration="9.827298888s" podCreationTimestamp="2026-03-16 16:38:33 +0000 UTC" firstStartedPulling="2026-03-16 16:38:35.705588261 +0000 UTC m=+5117.432978578" lastFinishedPulling="2026-03-16 16:38:42.41746212 +0000 UTC m=+5124.144852447" observedRunningTime="2026-03-16 16:38:42.821045238 +0000 UTC m=+5124.548435525" watchObservedRunningTime="2026-03-16 16:38:42.827298888 +0000 UTC m=+5124.554689175" Mar 16 16:38:44 crc kubenswrapper[4736]: I0316 16:38:44.231182 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:44 crc kubenswrapper[4736]: I0316 16:38:44.231484 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:38:45 crc kubenswrapper[4736]: I0316 16:38:45.285814 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56m7r" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" probeResult="failure" output=< Mar 16 16:38:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:38:45 crc kubenswrapper[4736]: > Mar 16 16:38:49 crc kubenswrapper[4736]: I0316 16:38:49.977937 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:38:49 crc kubenswrapper[4736]: E0316 16:38:49.978680 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:38:55 crc kubenswrapper[4736]: I0316 16:38:55.278250 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56m7r" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" probeResult="failure" output=< Mar 16 16:38:55 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:38:55 crc kubenswrapper[4736]: > 
Mar 16 16:38:57 crc kubenswrapper[4736]: I0316 16:38:57.508437 4736 scope.go:117] "RemoveContainer" containerID="a93ce01159af2488c2b9f0ee2c194dee6b94d92d73f69547983c88b774036fad" Mar 16 16:39:00 crc kubenswrapper[4736]: I0316 16:39:00.978489 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:39:00 crc kubenswrapper[4736]: E0316 16:39:00.979182 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:39:05 crc kubenswrapper[4736]: I0316 16:39:05.283907 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56m7r" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" probeResult="failure" output=< Mar 16 16:39:05 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:39:05 crc kubenswrapper[4736]: > Mar 16 16:39:14 crc kubenswrapper[4736]: I0316 16:39:14.979035 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:39:14 crc kubenswrapper[4736]: E0316 16:39:14.980342 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:39:15 crc kubenswrapper[4736]: I0316 16:39:15.313943 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56m7r" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" probeResult="failure" output=< Mar 16 16:39:15 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:39:15 crc kubenswrapper[4736]: > Mar 16 16:39:24 crc kubenswrapper[4736]: I0316 16:39:24.292230 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:39:24 crc kubenswrapper[4736]: I0316 16:39:24.340133 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:39:24 crc kubenswrapper[4736]: I0316 16:39:24.570720 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:39:25 crc kubenswrapper[4736]: I0316 16:39:25.978554 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:39:25 crc kubenswrapper[4736]: E0316 16:39:25.979167 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:39:26 crc kubenswrapper[4736]: I0316 16:39:26.210821 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56m7r" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" containerID="cri-o://0169d43387c550187fab2efb2b45d022efb5715dc5fba091a8d066af81045295" gracePeriod=2 Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.221151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerDied","Data":"0169d43387c550187fab2efb2b45d022efb5715dc5fba091a8d066af81045295"} Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.221740 4736 generic.go:334] "Generic (PLEG): container finished" podID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerID="0169d43387c550187fab2efb2b45d022efb5715dc5fba091a8d066af81045295" exitCode=0 Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.800074 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.927260 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9vvx\" (UniqueName: \"kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx\") pod \"013776c9-00e6-40fc-a822-c01e0b9337fe\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.927573 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content\") pod \"013776c9-00e6-40fc-a822-c01e0b9337fe\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.927695 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities\") pod \"013776c9-00e6-40fc-a822-c01e0b9337fe\" (UID: \"013776c9-00e6-40fc-a822-c01e0b9337fe\") " Mar 16 16:39:27 crc kubenswrapper[4736]: I0316 16:39:27.946878 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities" (OuterVolumeSpecName: "utilities") pod "013776c9-00e6-40fc-a822-c01e0b9337fe" (UID: "013776c9-00e6-40fc-a822-c01e0b9337fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.072740 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.195613 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx" (OuterVolumeSpecName: "kube-api-access-r9vvx") pod "013776c9-00e6-40fc-a822-c01e0b9337fe" (UID: "013776c9-00e6-40fc-a822-c01e0b9337fe"). InnerVolumeSpecName "kube-api-access-r9vvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.232518 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56m7r" event={"ID":"013776c9-00e6-40fc-a822-c01e0b9337fe","Type":"ContainerDied","Data":"3d4854647f31c8c6cf8727768007da25b24c08af96ab1020dd044a669ed414f2"} Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.232571 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56m7r" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.232588 4736 scope.go:117] "RemoveContainer" containerID="0169d43387c550187fab2efb2b45d022efb5715dc5fba091a8d066af81045295" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.275617 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9vvx\" (UniqueName: \"kubernetes.io/projected/013776c9-00e6-40fc-a822-c01e0b9337fe-kube-api-access-r9vvx\") on node \"crc\" DevicePath \"\"" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.294030 4736 scope.go:117] "RemoveContainer" containerID="a5ca5007705aa3b4cf08cee007d2c15add54911d8aea182106eca5884f570f71" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.335219 4736 scope.go:117] "RemoveContainer" containerID="32eb318e72c8153c32c3512a24f6a20af2c7fac40122c31eba3aad27e71b11f4" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.398857 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "013776c9-00e6-40fc-a822-c01e0b9337fe" (UID: "013776c9-00e6-40fc-a822-c01e0b9337fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.481036 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013776c9-00e6-40fc-a822-c01e0b9337fe-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.568758 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.580599 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56m7r"] Mar 16 16:39:28 crc kubenswrapper[4736]: I0316 16:39:28.989812 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" path="/var/lib/kubelet/pods/013776c9-00e6-40fc-a822-c01e0b9337fe/volumes" Mar 16 16:39:40 crc kubenswrapper[4736]: I0316 16:39:40.978158 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:39:40 crc kubenswrapper[4736]: E0316 16:39:40.979185 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:39:53 crc kubenswrapper[4736]: I0316 16:39:53.978751 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 
16:39:53 crc kubenswrapper[4736]: E0316 16:39:53.979582 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.276209 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561320-j7rfh"] Mar 16 16:40:00 crc kubenswrapper[4736]: E0316 16:40:00.281895 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.282279 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" Mar 16 16:40:00 crc kubenswrapper[4736]: E0316 16:40:00.282569 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="extract-utilities" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.282775 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="extract-utilities" Mar 16 16:40:00 crc kubenswrapper[4736]: E0316 16:40:00.282908 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="extract-content" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.283146 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="extract-content" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.284472 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="013776c9-00e6-40fc-a822-c01e0b9337fe" containerName="registry-server" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.297957 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561320-j7rfh"] Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.294286 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.315785 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.327308 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.332335 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.357338 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mllr2\" (UniqueName: \"kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2\") pod \"auto-csr-approver-29561320-j7rfh\" (UID: \"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036\") " pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.458725 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mllr2\" (UniqueName: \"kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2\") pod \"auto-csr-approver-29561320-j7rfh\" (UID: \"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036\") " pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.492402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mllr2\" (UniqueName: \"kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2\") pod \"auto-csr-approver-29561320-j7rfh\" (UID: \"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036\") " pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:00 crc kubenswrapper[4736]: I0316 16:40:00.633554 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:01 crc kubenswrapper[4736]: I0316 16:40:01.177825 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561320-j7rfh"] Mar 16 16:40:01 crc kubenswrapper[4736]: I0316 16:40:01.575899 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" event={"ID":"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036","Type":"ContainerStarted","Data":"6e3d83fa457d89d20a6fe2a25944d154375105cfc4b5c42c4225ceda2770e062"} Mar 16 16:40:04 crc kubenswrapper[4736]: I0316 16:40:04.608205 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" event={"ID":"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036","Type":"ContainerStarted","Data":"b6750833f2a85e01b27aa6e8bb2f958e3f050aadeb692935e5b62c81b72113a4"} Mar 16 16:40:04 crc kubenswrapper[4736]: I0316 16:40:04.631778 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" podStartSLOduration=2.89495432 podStartE2EDuration="4.630947183s" podCreationTimestamp="2026-03-16 16:40:00 +0000 UTC" firstStartedPulling="2026-03-16 16:40:01.19948253 +0000 UTC m=+5202.926872817" lastFinishedPulling="2026-03-16 16:40:02.935475393 +0000 UTC m=+5204.662865680" observedRunningTime="2026-03-16 16:40:04.621550548 +0000 UTC m=+5206.348940835" watchObservedRunningTime="2026-03-16 16:40:04.630947183 +0000 UTC m=+5206.358337470" Mar 16 16:40:05 crc kubenswrapper[4736]: I0316 16:40:05.617538 4736 generic.go:334] "Generic (PLEG): container finished" podID="7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" containerID="b6750833f2a85e01b27aa6e8bb2f958e3f050aadeb692935e5b62c81b72113a4" exitCode=0 Mar 16 16:40:05 crc kubenswrapper[4736]: I0316 16:40:05.617599 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" event={"ID":"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036","Type":"ContainerDied","Data":"b6750833f2a85e01b27aa6e8bb2f958e3f050aadeb692935e5b62c81b72113a4"} Mar 16 16:40:05 crc kubenswrapper[4736]: I0316 16:40:05.978212 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:40:05 crc kubenswrapper[4736]: E0316 16:40:05.978599 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.044805 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.122678 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mllr2\" (UniqueName: \"kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2\") pod \"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036\" (UID: \"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036\") " Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.135383 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2" (OuterVolumeSpecName: "kube-api-access-mllr2") pod "7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" (UID: "7602ffc8-e73d-4c3c-8cf8-c02fa01bc036"). InnerVolumeSpecName "kube-api-access-mllr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.224801 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mllr2\" (UniqueName: \"kubernetes.io/projected/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036-kube-api-access-mllr2\") on node \"crc\" DevicePath \"\"" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.638169 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" event={"ID":"7602ffc8-e73d-4c3c-8cf8-c02fa01bc036","Type":"ContainerDied","Data":"6e3d83fa457d89d20a6fe2a25944d154375105cfc4b5c42c4225ceda2770e062"} Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.638521 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3d83fa457d89d20a6fe2a25944d154375105cfc4b5c42c4225ceda2770e062" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.638250 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561320-j7rfh" Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.701438 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561314-8n726"] Mar 16 16:40:07 crc kubenswrapper[4736]: I0316 16:40:07.709013 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561314-8n726"] Mar 16 16:40:08 crc kubenswrapper[4736]: I0316 16:40:08.991071 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245a7b65-d83a-4157-9ed1-d68cb1c164d3" path="/var/lib/kubelet/pods/245a7b65-d83a-4157-9ed1-d68cb1c164d3/volumes" Mar 16 16:40:16 crc kubenswrapper[4736]: I0316 16:40:16.978325 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:40:16 crc kubenswrapper[4736]: E0316 16:40:16.979238 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:40:31 crc kubenswrapper[4736]: I0316 16:40:31.978746 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:40:31 crc kubenswrapper[4736]: E0316 16:40:31.980447 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:40:45 crc kubenswrapper[4736]: I0316 16:40:45.978094 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:40:45 crc kubenswrapper[4736]: E0316 16:40:45.978937 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:40:57 crc kubenswrapper[4736]: I0316 16:40:57.685724 4736 scope.go:117] "RemoveContainer" containerID="1e223e4ee806e29648788039e4a1cc15fa7d40a6cac4f06a95b315762a868164" Mar 16 16:40:58 crc kubenswrapper[4736]: I0316 16:40:58.987034 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:40:58 crc kubenswrapper[4736]: E0316 16:40:58.988364 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 
16:41:10 crc kubenswrapper[4736]: I0316 16:41:10.977909 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:41:10 crc kubenswrapper[4736]: E0316 16:41:10.978950 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:41:25 crc kubenswrapper[4736]: I0316 16:41:25.978789 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:41:25 crc kubenswrapper[4736]: E0316 16:41:25.980055 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:41:37 crc kubenswrapper[4736]: I0316 16:41:37.978099 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:41:37 crc kubenswrapper[4736]: E0316 16:41:37.979058 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:41:50 crc kubenswrapper[4736]: I0316 16:41:50.979289 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:41:50 crc kubenswrapper[4736]: E0316 16:41:50.979998 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.164289 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561322-n84r6"] Mar 16 16:42:00 crc kubenswrapper[4736]: E0316 16:42:00.165448 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" containerName="oc" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.165468 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" containerName="oc" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.166171 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" containerName="oc" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.167602 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.172271 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.173517 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561322-n84r6"] Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.172546 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.172586 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.324865 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg28w\" (UniqueName: \"kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w\") pod \"auto-csr-approver-29561322-n84r6\" (UID: \"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c\") " pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.427043 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg28w\" (UniqueName: \"kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w\") pod \"auto-csr-approver-29561322-n84r6\" (UID: \"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c\") " pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.470142 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg28w\" (UniqueName: \"kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w\") pod \"auto-csr-approver-29561322-n84r6\" (UID: \"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c\") " pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:00 crc kubenswrapper[4736]: I0316 16:42:00.548007 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:01 crc kubenswrapper[4736]: I0316 16:42:01.398979 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561322-n84r6"] Mar 16 16:42:01 crc kubenswrapper[4736]: I0316 16:42:01.660574 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561322-n84r6" event={"ID":"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c","Type":"ContainerStarted","Data":"1c9ea53fc393a1d9440b367c3f38831c1146804433f0f0344332994ba4127b50"} Mar 16 16:42:04 crc kubenswrapper[4736]: I0316 16:42:04.706841 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561322-n84r6" event={"ID":"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c","Type":"ContainerStarted","Data":"7e3ddeed556644faf44436db56872aadd8b3b52f54795827168f33ab472990d1"} Mar 16 16:42:04 crc kubenswrapper[4736]: I0316 16:42:04.735175 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561322-n84r6" podStartSLOduration=3.2519895180000002 podStartE2EDuration="4.734077704s" podCreationTimestamp="2026-03-16 16:42:00 +0000 UTC" firstStartedPulling="2026-03-16 16:42:01.413950969 +0000 UTC m=+5323.141341256" lastFinishedPulling="2026-03-16 16:42:02.896039145 +0000 UTC m=+5324.623429442" observedRunningTime="2026-03-16 16:42:04.729809637 +0000 UTC m=+5326.457199924" watchObservedRunningTime="2026-03-16 16:42:04.734077704 +0000 UTC m=+5326.461467991" Mar 16 16:42:04 crc kubenswrapper[4736]: I0316 16:42:04.978176 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:42:04 crc kubenswrapper[4736]: E0316 16:42:04.978575 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:42:05 crc kubenswrapper[4736]: I0316 16:42:05.716680 4736 generic.go:334] "Generic (PLEG): container finished" podID="b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" containerID="7e3ddeed556644faf44436db56872aadd8b3b52f54795827168f33ab472990d1" exitCode=0 Mar 16 16:42:05 crc kubenswrapper[4736]: I0316 16:42:05.716902 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561322-n84r6" event={"ID":"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c","Type":"ContainerDied","Data":"7e3ddeed556644faf44436db56872aadd8b3b52f54795827168f33ab472990d1"} Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.152267 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.166165 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg28w\" (UniqueName: \"kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w\") pod \"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c\" (UID: \"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c\") " Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.176308 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w" (OuterVolumeSpecName: "kube-api-access-hg28w") pod "b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" (UID: "b5a5e72b-2f76-4487-9a0c-1246ff23ec6c"). InnerVolumeSpecName "kube-api-access-hg28w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.271699 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg28w\" (UniqueName: \"kubernetes.io/projected/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c-kube-api-access-hg28w\") on node \"crc\" DevicePath \"\"" Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.738404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561322-n84r6" event={"ID":"b5a5e72b-2f76-4487-9a0c-1246ff23ec6c","Type":"ContainerDied","Data":"1c9ea53fc393a1d9440b367c3f38831c1146804433f0f0344332994ba4127b50"} Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.738442 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c9ea53fc393a1d9440b367c3f38831c1146804433f0f0344332994ba4127b50" Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.738481 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561322-n84r6" Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.853291 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561316-fqqhg"] Mar 16 16:42:07 crc kubenswrapper[4736]: I0316 16:42:07.864773 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561316-fqqhg"] Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.322671 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:08 crc kubenswrapper[4736]: E0316 16:42:08.323080 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" containerName="oc" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.323096 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" containerName="oc" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.323353 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" containerName="oc" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.326279 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.345824 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.397337 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lh9\" (UniqueName: \"kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.397473 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.397527 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.499966 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.500535 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.500677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lh9\" (UniqueName: \"kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.500770 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.501257 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.527858 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-76lh9\" (UniqueName: \"kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9\") pod \"redhat-marketplace-srfxk\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.688315 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:08 crc kubenswrapper[4736]: I0316 16:42:08.988520 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b430cbb-acce-4f2a-8eed-2af3cb630bd5" path="/var/lib/kubelet/pods/3b430cbb-acce-4f2a-8eed-2af3cb630bd5/volumes" Mar 16 16:42:09 crc kubenswrapper[4736]: W0316 16:42:09.189859 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0fa8f42_9fa2_4e58_823e_e08d3a1364f3.slice/crio-f54d25265ec950191c65ec1818d176e39d9157fdb1d0b090a61efb1ea675e5c4 WatchSource:0}: Error finding container f54d25265ec950191c65ec1818d176e39d9157fdb1d0b090a61efb1ea675e5c4: Status 404 returned error can't find the container with id f54d25265ec950191c65ec1818d176e39d9157fdb1d0b090a61efb1ea675e5c4 Mar 16 16:42:09 crc kubenswrapper[4736]: I0316 16:42:09.199396 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:09 crc kubenswrapper[4736]: I0316 16:42:09.817135 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerID="1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a" exitCode=0 Mar 16 16:42:09 crc kubenswrapper[4736]: I0316 16:42:09.817193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerDied","Data":"1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a"} Mar 16 16:42:09 crc kubenswrapper[4736]: I0316 16:42:09.817362 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerStarted","Data":"f54d25265ec950191c65ec1818d176e39d9157fdb1d0b090a61efb1ea675e5c4"} Mar 16 16:42:11 crc kubenswrapper[4736]: I0316 16:42:11.839048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerStarted","Data":"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a"} Mar 16 16:42:12 crc kubenswrapper[4736]: I0316 16:42:12.854700 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerID="643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a" exitCode=0 Mar 16 16:42:12 crc kubenswrapper[4736]: I0316 16:42:12.854795 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerDied","Data":"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a"} Mar 16 16:42:13 crc kubenswrapper[4736]: I0316 16:42:13.867245 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" 
event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerStarted","Data":"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339"} Mar 16 16:42:13 crc kubenswrapper[4736]: I0316 16:42:13.896407 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srfxk" podStartSLOduration=2.263866857 podStartE2EDuration="5.89638807s" podCreationTimestamp="2026-03-16 16:42:08 +0000 UTC" firstStartedPulling="2026-03-16 16:42:09.819577931 +0000 UTC m=+5331.546968218" lastFinishedPulling="2026-03-16 16:42:13.452099154 +0000 UTC m=+5335.179489431" observedRunningTime="2026-03-16 16:42:13.886805979 +0000 UTC m=+5335.614196266" watchObservedRunningTime="2026-03-16 16:42:13.89638807 +0000 UTC m=+5335.623778357" Mar 16 16:42:17 crc kubenswrapper[4736]: I0316 16:42:17.977554 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:42:17 crc kubenswrapper[4736]: E0316 16:42:17.978074 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:42:18 crc kubenswrapper[4736]: I0316 16:42:18.689030 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:18 crc kubenswrapper[4736]: I0316 16:42:18.689386 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:19 crc kubenswrapper[4736]: I0316 16:42:19.756239 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-srfxk" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="registry-server" probeResult="failure" output=< Mar 16 16:42:19 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:42:19 crc kubenswrapper[4736]: > Mar 16 16:42:28 crc kubenswrapper[4736]: I0316 16:42:28.768160 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:28 crc kubenswrapper[4736]: I0316 16:42:28.843948 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:28 crc kubenswrapper[4736]: I0316 16:42:28.983982 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:42:28 crc kubenswrapper[4736]: E0316 16:42:28.984317 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:42:29 crc kubenswrapper[4736]: I0316 16:42:29.014814 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.017470 4736 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srfxk" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="registry-server" containerID="cri-o://63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339" gracePeriod=2 Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.699782 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.870132 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities\") pod \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.870222 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content\") pod \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.870281 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lh9\" (UniqueName: \"kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9\") pod \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\" (UID: \"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3\") " Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.870854 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities" (OuterVolumeSpecName: "utilities") pod "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" (UID: "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.879556 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9" (OuterVolumeSpecName: "kube-api-access-76lh9") pod "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" (UID: "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3"). InnerVolumeSpecName "kube-api-access-76lh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.902155 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" (UID: "b0fa8f42-9fa2-4e58-823e-e08d3a1364f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.972833 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lh9\" (UniqueName: \"kubernetes.io/projected/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-kube-api-access-76lh9\") on node \"crc\" DevicePath \"\"" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.972873 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:42:30 crc kubenswrapper[4736]: I0316 16:42:30.972882 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.025869 4736 generic.go:334] "Generic (PLEG): container finished" podID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerID="63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339" exitCode=0 Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.026985 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srfxk" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.027000 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerDied","Data":"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339"} Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.027275 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srfxk" event={"ID":"b0fa8f42-9fa2-4e58-823e-e08d3a1364f3","Type":"ContainerDied","Data":"f54d25265ec950191c65ec1818d176e39d9157fdb1d0b090a61efb1ea675e5c4"} Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.027327 4736 scope.go:117] "RemoveContainer" containerID="63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.051491 4736 scope.go:117] "RemoveContainer" containerID="643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.056759 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.066159 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srfxk"] Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.090300 4736 scope.go:117] "RemoveContainer" containerID="1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.120235 4736 scope.go:117] "RemoveContainer" containerID="63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339" Mar 16 16:42:31 crc kubenswrapper[4736]: E0316 16:42:31.124964 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339\": container with ID starting with 63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339 not found: ID does not exist" containerID="63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.125012 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339"} err="failed to get container status \"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339\": rpc error: code = NotFound desc = could not find container \"63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339\": container with ID starting with 63cb94556ab37a4d4d53a04b6fd7e3f035d188079ea2b0b9305747537ac2c339 not found: ID does not exist" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.125041 4736 scope.go:117] "RemoveContainer" containerID="643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a" Mar 16 16:42:31 crc kubenswrapper[4736]: E0316 16:42:31.125486 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a\": container with ID starting with 643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a not found: ID does not exist" containerID="643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.125514 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a"} err="failed to get container status \"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a\": rpc error: code = NotFound desc = could not find container \"643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a\": container with ID starting with 643332c07a92eafe627ec8ef938fdd147723b9361aa86d7389c89168975fdb7a not found: ID does not exist" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.125533 4736 scope.go:117] "RemoveContainer" containerID="1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a" Mar 16 16:42:31 crc kubenswrapper[4736]: E0316 16:42:31.125866 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a\": container with ID starting with 1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a not found: ID does not exist" containerID="1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a" Mar 16 16:42:31 crc kubenswrapper[4736]: I0316 16:42:31.125890 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a"} err="failed to get container status \"1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a\": rpc error: code = NotFound desc = could not find container \"1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a\": container with ID starting with 1e550d6c3901d175b2d159fd6feac6b5d9e669eab44911abf9b57904a7e9426a not found: ID does not exist" Mar 16 16:42:32 crc kubenswrapper[4736]: I0316 16:42:32.989174 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" path="/var/lib/kubelet/pods/b0fa8f42-9fa2-4e58-823e-e08d3a1364f3/volumes" Mar 16 16:42:40 crc kubenswrapper[4736]: I0316 16:42:40.978317 4736 scope.go:117] "RemoveContainer" containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:42:42 crc kubenswrapper[4736]: I0316 16:42:42.126049 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1"} Mar 16 16:42:58 crc kubenswrapper[4736]: I0316 16:42:58.267905 4736 scope.go:117] "RemoveContainer" containerID="b70f96db160ab6c2c20124a91eaed7f9224648013eaf2622534a870961572d80" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.156429 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:42:59 crc kubenswrapper[4736]: E0316 16:42:59.156999 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="extract-content" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.157014 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="extract-content" Mar 16 16:42:59 crc kubenswrapper[4736]: E0316 16:42:59.157040 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="extract-utilities" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.157046 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="extract-utilities" Mar 16 16:42:59 crc kubenswrapper[4736]: E0316 16:42:59.157062 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="registry-server" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.157068 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="registry-server" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.159119 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fa8f42-9fa2-4e58-823e-e08d3a1364f3" containerName="registry-server" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.161581 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.183612 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.324507 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.324596 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgtl\" (UniqueName: \"kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.324631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.425997 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.426336 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgtl\" (UniqueName: \"kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.426440 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.426485 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.426855 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.453079 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ptgtl\" (UniqueName: \"kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl\") pod \"community-operators-bz22b\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:42:59 crc kubenswrapper[4736]: I0316 16:42:59.489179 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:00 crc kubenswrapper[4736]: I0316 16:43:00.063249 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:43:00 crc kubenswrapper[4736]: I0316 16:43:00.288632 4736 generic.go:334] "Generic (PLEG): container finished" podID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerID="1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c" exitCode=0 Mar 16 16:43:00 crc kubenswrapper[4736]: I0316 16:43:00.288725 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerDied","Data":"1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c"} Mar 16 16:43:00 crc kubenswrapper[4736]: I0316 16:43:00.288936 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerStarted","Data":"512a00ecf361fdba2f63bc15b0b9b99aaef68c98272ab0b3dbf99d2ab484f20a"} Mar 16 16:43:01 crc kubenswrapper[4736]: I0316 16:43:01.300719 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerStarted","Data":"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3"} Mar 16 16:43:03 crc kubenswrapper[4736]: I0316 16:43:03.329472 4736 generic.go:334] "Generic (PLEG): container finished" podID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerID="705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3" exitCode=0 Mar 16 16:43:03 crc kubenswrapper[4736]: I0316 16:43:03.329718 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerDied","Data":"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3"} Mar 16 16:43:03 crc kubenswrapper[4736]: I0316 16:43:03.333998 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:43:04 crc kubenswrapper[4736]: I0316 16:43:04.340465 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerStarted","Data":"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744"} Mar 16 16:43:04 crc kubenswrapper[4736]: I0316 16:43:04.368139 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bz22b" podStartSLOduration=1.9089062989999999 podStartE2EDuration="5.368101347s" podCreationTimestamp="2026-03-16 16:42:59 +0000 UTC" firstStartedPulling="2026-03-16 16:43:00.290259374 +0000 UTC m=+5382.017649661" lastFinishedPulling="2026-03-16 16:43:03.749454422 +0000 UTC m=+5385.476844709" observedRunningTime="2026-03-16 16:43:04.361713803 +0000 UTC m=+5386.089104090" watchObservedRunningTime="2026-03-16 
16:43:04.368101347 +0000 UTC m=+5386.095491634" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.558978 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.561448 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.576287 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.689591 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.689714 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.689854 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wbcq\" (UniqueName: \"kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.792239 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.792343 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.792514 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wbcq\" (UniqueName: \"kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.792907 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:06 crc kubenswrapper[4736]: I0316 16:43:06.793001 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:07 crc kubenswrapper[4736]: I0316 16:43:07.193316 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wbcq\" (UniqueName: \"kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq\") pod \"certified-operators-ggvd9\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:07 crc kubenswrapper[4736]: I0316 16:43:07.487830 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:08 crc kubenswrapper[4736]: I0316 16:43:08.056188 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:08 crc kubenswrapper[4736]: W0316 16:43:08.061975 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68ea313b_f446_49d9_9ddd_eaa8e21dce99.slice/crio-8415cac0f7bb221eefd028c0716be192160ca4bdaea7d6612796935bc48bd76d WatchSource:0}: Error finding container 8415cac0f7bb221eefd028c0716be192160ca4bdaea7d6612796935bc48bd76d: Status 404 returned error can't find the container with id 8415cac0f7bb221eefd028c0716be192160ca4bdaea7d6612796935bc48bd76d Mar 16 16:43:08 crc kubenswrapper[4736]: I0316 16:43:08.378128 4736 generic.go:334] "Generic (PLEG): container finished" podID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerID="663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d" exitCode=0 Mar 16 16:43:08 crc kubenswrapper[4736]: I0316 16:43:08.378201 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerDied","Data":"663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d"} Mar 16 16:43:08 crc kubenswrapper[4736]: I0316 16:43:08.378706 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerStarted","Data":"8415cac0f7bb221eefd028c0716be192160ca4bdaea7d6612796935bc48bd76d"} Mar 16 16:43:09 crc kubenswrapper[4736]: I0316 16:43:09.489961 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:09 crc kubenswrapper[4736]: I0316 16:43:09.491811 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:10 crc kubenswrapper[4736]: I0316 16:43:10.434715 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerStarted","Data":"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c"} Mar 16 16:43:10 crc kubenswrapper[4736]: I0316 16:43:10.550222 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bz22b" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="registry-server" probeResult="failure" output=< Mar 16 16:43:10 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:43:10 
crc kubenswrapper[4736]: > Mar 16 16:43:11 crc kubenswrapper[4736]: I0316 16:43:11.433552 4736 generic.go:334] "Generic (PLEG): container finished" podID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerID="f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c" exitCode=0 Mar 16 16:43:11 crc kubenswrapper[4736]: I0316 16:43:11.433613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerDied","Data":"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c"} Mar 16 16:43:12 crc kubenswrapper[4736]: I0316 16:43:12.446363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerStarted","Data":"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455"} Mar 16 16:43:12 crc kubenswrapper[4736]: I0316 16:43:12.480578 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggvd9" podStartSLOduration=3.019787714 podStartE2EDuration="6.480562077s" podCreationTimestamp="2026-03-16 16:43:06 +0000 UTC" firstStartedPulling="2026-03-16 16:43:08.379604353 +0000 UTC m=+5390.106994640" lastFinishedPulling="2026-03-16 16:43:11.840378716 +0000 UTC m=+5393.567769003" observedRunningTime="2026-03-16 16:43:12.475125119 +0000 UTC m=+5394.202515426" watchObservedRunningTime="2026-03-16 16:43:12.480562077 +0000 UTC m=+5394.207952364" Mar 16 16:43:17 crc kubenswrapper[4736]: I0316 16:43:17.488827 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:17 crc kubenswrapper[4736]: I0316 16:43:17.489322 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:18 crc kubenswrapper[4736]: I0316 16:43:18.547569 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ggvd9" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="registry-server" probeResult="failure" output=< Mar 16 16:43:18 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:43:18 crc kubenswrapper[4736]: > Mar 16 16:43:19 crc kubenswrapper[4736]: I0316 16:43:19.637846 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:19 crc kubenswrapper[4736]: I0316 16:43:19.698206 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:19 crc kubenswrapper[4736]: I0316 16:43:19.883487 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:43:21 crc kubenswrapper[4736]: I0316 16:43:21.536306 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bz22b" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="registry-server" containerID="cri-o://555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744" gracePeriod=2 Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.131498 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.301945 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities\") pod \"e556d5d4-ad61-44fe-987c-efc4917dbf08\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.302279 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptgtl\" (UniqueName: \"kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl\") pod \"e556d5d4-ad61-44fe-987c-efc4917dbf08\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.302306 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content\") pod \"e556d5d4-ad61-44fe-987c-efc4917dbf08\" (UID: \"e556d5d4-ad61-44fe-987c-efc4917dbf08\") " Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.302523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities" (OuterVolumeSpecName: "utilities") pod "e556d5d4-ad61-44fe-987c-efc4917dbf08" (UID: "e556d5d4-ad61-44fe-987c-efc4917dbf08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.303132 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.314409 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl" (OuterVolumeSpecName: "kube-api-access-ptgtl") pod "e556d5d4-ad61-44fe-987c-efc4917dbf08" (UID: "e556d5d4-ad61-44fe-987c-efc4917dbf08"). InnerVolumeSpecName "kube-api-access-ptgtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.357036 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e556d5d4-ad61-44fe-987c-efc4917dbf08" (UID: "e556d5d4-ad61-44fe-987c-efc4917dbf08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.404828 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptgtl\" (UniqueName: \"kubernetes.io/projected/e556d5d4-ad61-44fe-987c-efc4917dbf08-kube-api-access-ptgtl\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.404871 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e556d5d4-ad61-44fe-987c-efc4917dbf08-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.548655 4736 generic.go:334] "Generic (PLEG): container finished" podID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerID="555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744" exitCode=0 Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.548714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerDied","Data":"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744"} Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.548756 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bz22b" event={"ID":"e556d5d4-ad61-44fe-987c-efc4917dbf08","Type":"ContainerDied","Data":"512a00ecf361fdba2f63bc15b0b9b99aaef68c98272ab0b3dbf99d2ab484f20a"} Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.548787 4736 scope.go:117] "RemoveContainer" containerID="555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.549048 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bz22b" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.602133 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.613865 4736 scope.go:117] "RemoveContainer" containerID="705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.616599 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bz22b"] Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.638744 4736 scope.go:117] "RemoveContainer" containerID="1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.686207 4736 scope.go:117] "RemoveContainer" containerID="555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744" Mar 16 16:43:22 crc kubenswrapper[4736]: E0316 16:43:22.686545 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744\": container with ID starting with 555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744 not found: ID does not exist" containerID="555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.686576 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744"} err="failed to get container status \"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744\": rpc error: code = NotFound desc = could not find container \"555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744\": container with ID starting with 555f8fc47b46947fb798bc1b903f08723af0e783037aec61335f883139277744 not found: ID does not exist" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.686604 4736 scope.go:117] "RemoveContainer" containerID="705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3" Mar 16 16:43:22 crc kubenswrapper[4736]: E0316 16:43:22.686905 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3\": container with ID starting with 705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3 not found: ID does not exist" containerID="705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.686946 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3"} err="failed to get container status \"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3\": rpc error: code = NotFound desc = could not find container \"705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3\": container with ID starting with 705995c09a0bd5bd4ff96c8c8eefdbd99f0414a003996ac4d36e8031e12978e3 not found: ID does not exist" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.686972 4736 scope.go:117] "RemoveContainer" containerID="1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c" Mar 16 16:43:22 crc kubenswrapper[4736]: E0316 16:43:22.687243 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c\": container with ID starting with 1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c not found: ID does not exist" containerID="1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.687274 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c"} err="failed to get container status \"1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c\": rpc error: code = NotFound desc = could not find container \"1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c\": container with ID starting with 1708baf64c03a26c796501a905bf86c826246affb7a390dd91f241ef4b6c665c not found: ID does not exist" Mar 16 16:43:22 crc kubenswrapper[4736]: I0316 16:43:22.990556 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" path="/var/lib/kubelet/pods/e556d5d4-ad61-44fe-987c-efc4917dbf08/volumes" Mar 16 16:43:27 crc kubenswrapper[4736]: I0316 16:43:27.545447 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:27 crc kubenswrapper[4736]: I0316 16:43:27.594291 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:27 crc kubenswrapper[4736]: I0316 16:43:27.783244 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:28 crc kubenswrapper[4736]: I0316 16:43:28.651358 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggvd9" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="registry-server" containerID="cri-o://06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455" gracePeriod=2 Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.178717 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.242331 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities\") pod \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.242443 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wbcq\" (UniqueName: \"kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq\") pod \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.242646 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content\") pod \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\" (UID: \"68ea313b-f446-49d9-9ddd-eaa8e21dce99\") " Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.243596 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities" (OuterVolumeSpecName: "utilities") pod "68ea313b-f446-49d9-9ddd-eaa8e21dce99" (UID: "68ea313b-f446-49d9-9ddd-eaa8e21dce99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.249027 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq" (OuterVolumeSpecName: "kube-api-access-8wbcq") pod "68ea313b-f446-49d9-9ddd-eaa8e21dce99" (UID: "68ea313b-f446-49d9-9ddd-eaa8e21dce99"). InnerVolumeSpecName "kube-api-access-8wbcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.305412 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68ea313b-f446-49d9-9ddd-eaa8e21dce99" (UID: "68ea313b-f446-49d9-9ddd-eaa8e21dce99"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.345424 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.345479 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68ea313b-f446-49d9-9ddd-eaa8e21dce99-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.345493 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wbcq\" (UniqueName: \"kubernetes.io/projected/68ea313b-f446-49d9-9ddd-eaa8e21dce99-kube-api-access-8wbcq\") on node \"crc\" DevicePath \"\"" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.662353 4736 generic.go:334] "Generic (PLEG): container finished" podID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerID="06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455" exitCode=0 Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.662406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerDied","Data":"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455"} Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.662442 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggvd9" event={"ID":"68ea313b-f446-49d9-9ddd-eaa8e21dce99","Type":"ContainerDied","Data":"8415cac0f7bb221eefd028c0716be192160ca4bdaea7d6612796935bc48bd76d"} Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.662464 4736 scope.go:117] "RemoveContainer" containerID="06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.662478 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggvd9" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.688221 4736 scope.go:117] "RemoveContainer" containerID="f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.710522 4736 scope.go:117] "RemoveContainer" containerID="663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.784944 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.787161 4736 scope.go:117] "RemoveContainer" containerID="06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455" Mar 16 16:43:29 crc kubenswrapper[4736]: E0316 16:43:29.787616 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455\": container with ID starting with 06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455 not found: ID does not exist" containerID="06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.787646 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455"} err="failed to get container status \"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455\": rpc error: code = NotFound desc = could not find container \"06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455\": container with ID starting with 06eae11f84f0638f6480158dc98a8e7543d8b4a8c5fcf3472dff612df7d20455 not found: ID does not exist" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.787669 4736 scope.go:117] "RemoveContainer" containerID="f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c" Mar 16 16:43:29 crc kubenswrapper[4736]: E0316 16:43:29.787984 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c\": container with ID starting with f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c not found: ID does not exist" containerID="f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.788006 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c"} err="failed to get container status \"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c\": rpc error: code = NotFound desc = could not find container \"f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c\": container with ID starting with f058853fc71a20486bfd8fe46e6a9d5732f605649a0ece6524648ea64a83f14c not found: ID does not exist" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.788021 4736 scope.go:117] "RemoveContainer" containerID="663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d" Mar 16 16:43:29 crc kubenswrapper[4736]: E0316 16:43:29.788296 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d\": container with ID starting with 
663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d not found: ID does not exist" containerID="663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.788317 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d"} err="failed to get container status \"663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d\": rpc error: code = NotFound desc = could not find container \"663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d\": container with ID starting with 663eec10bf6292c282d89a2d4750ea698eafdaae6d3cb61c3f8dc00e4629bb9d not found: ID does not exist" Mar 16 16:43:29 crc kubenswrapper[4736]: I0316 16:43:29.797153 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggvd9"] Mar 16 16:43:30 crc kubenswrapper[4736]: I0316 16:43:30.993598 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" path="/var/lib/kubelet/pods/68ea313b-f446-49d9-9ddd-eaa8e21dce99/volumes" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.177652 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561324-sc9gp"] Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178775 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178793 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178826 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="extract-utilities" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178836 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="extract-utilities" Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178846 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="extract-content" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178855 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="extract-content" Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178884 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="extract-utilities" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178893 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="extract-utilities" Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178903 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="extract-content" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178913 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="extract-content" Mar 16 16:44:00 crc kubenswrapper[4736]: E0316 16:44:00.178923 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" 
containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.178933 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.179204 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e556d5d4-ad61-44fe-987c-efc4917dbf08" containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.179232 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ea313b-f446-49d9-9ddd-eaa8e21dce99" containerName="registry-server" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.180054 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.188938 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561324-sc9gp"] Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.195926 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.197717 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.197966 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.321782 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hn2\" (UniqueName: \"kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2\") pod \"auto-csr-approver-29561324-sc9gp\" (UID: \"3c9d8c74-f7a4-4136-9b76-45f838fc532e\") " pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.424209 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hn2\" (UniqueName: \"kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2\") pod \"auto-csr-approver-29561324-sc9gp\" (UID: \"3c9d8c74-f7a4-4136-9b76-45f838fc532e\") " pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.448333 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hn2\" (UniqueName: \"kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2\") pod \"auto-csr-approver-29561324-sc9gp\" (UID: \"3c9d8c74-f7a4-4136-9b76-45f838fc532e\") " pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:00 crc kubenswrapper[4736]: I0316 16:44:00.539328 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:01 crc kubenswrapper[4736]: I0316 16:44:01.037756 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561324-sc9gp"] Mar 16 16:44:01 crc kubenswrapper[4736]: I0316 16:44:01.955437 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" event={"ID":"3c9d8c74-f7a4-4136-9b76-45f838fc532e","Type":"ContainerStarted","Data":"7e445e21f4366196fef36ab8bf75ef2f0b69b89f5355a5214fcb30733c04e0cb"} Mar 16 16:44:03 crc kubenswrapper[4736]: I0316 16:44:03.983752 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" event={"ID":"3c9d8c74-f7a4-4136-9b76-45f838fc532e","Type":"ContainerStarted","Data":"3c8db7265a539301cae9106a8eb6ed235f403a70fdfb51f140e18e3cb9a3a2d0"} Mar 16 16:44:04 crc kubenswrapper[4736]: I0316 16:44:04.015158 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" podStartSLOduration=2.856941799 podStartE2EDuration="4.015136672s" podCreationTimestamp="2026-03-16 16:44:00 +0000 UTC" firstStartedPulling="2026-03-16 16:44:01.040190304 +0000 UTC m=+5442.767580591" lastFinishedPulling="2026-03-16 16:44:02.198385177 +0000 UTC m=+5443.925775464" observedRunningTime="2026-03-16 16:44:04.008904273 +0000 UTC m=+5445.736294640" watchObservedRunningTime="2026-03-16 16:44:04.015136672 +0000 UTC m=+5445.742526959" Mar 16 16:44:04 crc kubenswrapper[4736]: I0316 16:44:04.993481 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c9d8c74-f7a4-4136-9b76-45f838fc532e" containerID="3c8db7265a539301cae9106a8eb6ed235f403a70fdfb51f140e18e3cb9a3a2d0" exitCode=0 Mar 16 16:44:04 crc kubenswrapper[4736]: I0316 16:44:04.993591 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" event={"ID":"3c9d8c74-f7a4-4136-9b76-45f838fc532e","Type":"ContainerDied","Data":"3c8db7265a539301cae9106a8eb6ed235f403a70fdfb51f140e18e3cb9a3a2d0"} Mar 16 16:44:06 crc kubenswrapper[4736]: I0316 16:44:06.392179 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:06 crc kubenswrapper[4736]: I0316 16:44:06.559323 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49hn2\" (UniqueName: \"kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2\") pod \"3c9d8c74-f7a4-4136-9b76-45f838fc532e\" (UID: \"3c9d8c74-f7a4-4136-9b76-45f838fc532e\") " Mar 16 16:44:06 crc kubenswrapper[4736]: I0316 16:44:06.574780 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2" (OuterVolumeSpecName: "kube-api-access-49hn2") pod "3c9d8c74-f7a4-4136-9b76-45f838fc532e" (UID: "3c9d8c74-f7a4-4136-9b76-45f838fc532e"). InnerVolumeSpecName "kube-api-access-49hn2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:44:06 crc kubenswrapper[4736]: I0316 16:44:06.661269 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49hn2\" (UniqueName: \"kubernetes.io/projected/3c9d8c74-f7a4-4136-9b76-45f838fc532e-kube-api-access-49hn2\") on node \"crc\" DevicePath \"\"" Mar 16 16:44:07 crc kubenswrapper[4736]: I0316 16:44:07.022251 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" event={"ID":"3c9d8c74-f7a4-4136-9b76-45f838fc532e","Type":"ContainerDied","Data":"7e445e21f4366196fef36ab8bf75ef2f0b69b89f5355a5214fcb30733c04e0cb"} Mar 16 16:44:07 crc kubenswrapper[4736]: I0316 16:44:07.022303 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e445e21f4366196fef36ab8bf75ef2f0b69b89f5355a5214fcb30733c04e0cb" Mar 16 16:44:07 crc kubenswrapper[4736]: I0316 16:44:07.022425 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561324-sc9gp" Mar 16 16:44:07 crc kubenswrapper[4736]: E0316 16:44:07.097819 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c9d8c74_f7a4_4136_9b76_45f838fc532e.slice\": RecentStats: unable to find data in memory cache]" Mar 16 16:44:07 crc kubenswrapper[4736]: I0316 16:44:07.476216 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561318-g4xt4"] Mar 16 16:44:07 crc kubenswrapper[4736]: I0316 16:44:07.483620 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561318-g4xt4"] Mar 16 16:44:08 crc kubenswrapper[4736]: I0316 16:44:08.996974 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599c5c0-edfb-4e99-99c9-af5ee2f068e3" path="/var/lib/kubelet/pods/7599c5c0-edfb-4e99-99c9-af5ee2f068e3/volumes" Mar 16 16:44:58 crc kubenswrapper[4736]: I0316 16:44:58.870052 4736 scope.go:117] "RemoveContainer" containerID="b7973b1bfd8546534ec49eedfa3dbf504406c8ffbe30e26a49d98b5e340dcb3a" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.153649 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl"] Mar 16 16:45:00 crc kubenswrapper[4736]: E0316 16:45:00.154516 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9d8c74-f7a4-4136-9b76-45f838fc532e" containerName="oc" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.154536 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9d8c74-f7a4-4136-9b76-45f838fc532e" containerName="oc" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.154874 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9d8c74-f7a4-4136-9b76-45f838fc532e" containerName="oc" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.155873 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.159267 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.161534 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.163079 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.163137 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.163205 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.163452 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl"] Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.265719 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.265762 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.265834 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.266895 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume\") pod 
\"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.271695 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.281238 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq\") pod \"collect-profiles-29561325-kf2fl\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:00 crc kubenswrapper[4736]: I0316 16:45:00.477056 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:01 crc kubenswrapper[4736]: I0316 16:45:01.038699 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl"] Mar 16 16:45:01 crc kubenswrapper[4736]: I0316 16:45:01.568541 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" event={"ID":"0731b440-a0eb-4665-81ad-6c49663b31ce","Type":"ContainerStarted","Data":"33095cd6dcf13d036c16292dc5a46d514369c90ce01a49100855b5853b27e2de"} Mar 16 16:45:01 crc kubenswrapper[4736]: I0316 16:45:01.568940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" event={"ID":"0731b440-a0eb-4665-81ad-6c49663b31ce","Type":"ContainerStarted","Data":"4fcab8c9d6c48893cfc6014a94e17b696d8c00be0b8cd24cfd942f2a727e2aee"} Mar 16 16:45:01 crc kubenswrapper[4736]: I0316 16:45:01.598814 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" podStartSLOduration=1.598746887 podStartE2EDuration="1.598746887s" podCreationTimestamp="2026-03-16 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 16:45:01.590649866 +0000 UTC m=+5503.318040153" watchObservedRunningTime="2026-03-16 16:45:01.598746887 +0000 UTC m=+5503.326137214" Mar 16 16:45:02 crc kubenswrapper[4736]: I0316 16:45:02.580199 4736 generic.go:334] "Generic (PLEG): container finished" podID="0731b440-a0eb-4665-81ad-6c49663b31ce" containerID="33095cd6dcf13d036c16292dc5a46d514369c90ce01a49100855b5853b27e2de" exitCode=0 Mar 16 16:45:02 crc kubenswrapper[4736]: I0316 16:45:02.580511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" event={"ID":"0731b440-a0eb-4665-81ad-6c49663b31ce","Type":"ContainerDied","Data":"33095cd6dcf13d036c16292dc5a46d514369c90ce01a49100855b5853b27e2de"} Mar 16 16:45:03 crc kubenswrapper[4736]: I0316 16:45:03.945954 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.138357 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume\") pod \"0731b440-a0eb-4665-81ad-6c49663b31ce\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.138636 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume\") pod \"0731b440-a0eb-4665-81ad-6c49663b31ce\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.138752 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq\") pod \"0731b440-a0eb-4665-81ad-6c49663b31ce\" (UID: \"0731b440-a0eb-4665-81ad-6c49663b31ce\") " Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.139468 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "0731b440-a0eb-4665-81ad-6c49663b31ce" (UID: "0731b440-a0eb-4665-81ad-6c49663b31ce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.146900 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0731b440-a0eb-4665-81ad-6c49663b31ce" (UID: "0731b440-a0eb-4665-81ad-6c49663b31ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.149347 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq" (OuterVolumeSpecName: "kube-api-access-x7bhq") pod "0731b440-a0eb-4665-81ad-6c49663b31ce" (UID: "0731b440-a0eb-4665-81ad-6c49663b31ce"). InnerVolumeSpecName "kube-api-access-x7bhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.240877 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0731b440-a0eb-4665-81ad-6c49663b31ce-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.240906 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bhq\" (UniqueName: \"kubernetes.io/projected/0731b440-a0eb-4665-81ad-6c49663b31ce-kube-api-access-x7bhq\") on node \"crc\" DevicePath \"\"" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.240918 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0731b440-a0eb-4665-81ad-6c49663b31ce-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.601482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" event={"ID":"0731b440-a0eb-4665-81ad-6c49663b31ce","Type":"ContainerDied","Data":"4fcab8c9d6c48893cfc6014a94e17b696d8c00be0b8cd24cfd942f2a727e2aee"} Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.601839 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fcab8c9d6c48893cfc6014a94e17b696d8c00be0b8cd24cfd942f2a727e2aee" Mar 16 16:45:04 crc kubenswrapper[4736]: I0316 16:45:04.601548 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl" Mar 16 16:45:05 crc kubenswrapper[4736]: I0316 16:45:05.066384 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9"] Mar 16 16:45:05 crc kubenswrapper[4736]: I0316 16:45:05.082863 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561280-6nbs9"] Mar 16 16:45:07 crc kubenswrapper[4736]: I0316 16:45:07.002904 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949c9ebb-dba5-4681-9252-c327b26a00e6" path="/var/lib/kubelet/pods/949c9ebb-dba5-4681-9252-c327b26a00e6/volumes" Mar 16 16:45:08 crc kubenswrapper[4736]: I0316 16:45:08.507972 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:45:08 crc kubenswrapper[4736]: I0316 16:45:08.508504 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:45:30 crc kubenswrapper[4736]: I0316 16:45:30.859363 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-678dd4f677-jxtsk" podUID="bccee937-d642-4483-87fb-033b157cf68c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Mar 16 16:45:38 crc kubenswrapper[4736]: I0316 16:45:38.513207 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:45:38 crc kubenswrapper[4736]: I0316 16:45:38.513899 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:45:58 crc kubenswrapper[4736]: I0316 16:45:58.987142 4736 scope.go:117] "RemoveContainer" containerID="73fd8b8407b09a866bb4f1ce7c7c3566d00c6a64b3ff46e8f3947878e5097cd8" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.139892 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561326-9vdsg"] Mar 16 16:46:00 crc kubenswrapper[4736]: E0316 16:46:00.141391 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0731b440-a0eb-4665-81ad-6c49663b31ce" containerName="collect-profiles" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.141500 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0731b440-a0eb-4665-81ad-6c49663b31ce" containerName="collect-profiles" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.144753 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0731b440-a0eb-4665-81ad-6c49663b31ce" containerName="collect-profiles" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.148351 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.171025 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.171614 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.171781 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.184276 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561326-9vdsg"] Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.230489 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qls\" (UniqueName: \"kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls\") pod \"auto-csr-approver-29561326-9vdsg\" (UID: \"c84612ed-b267-4af5-990d-08890b3b81b1\") " pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.332137 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6qls\" (UniqueName: \"kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls\") pod \"auto-csr-approver-29561326-9vdsg\" (UID: \"c84612ed-b267-4af5-990d-08890b3b81b1\") " pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.350004 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6qls\" (UniqueName: \"kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls\") pod \"auto-csr-approver-29561326-9vdsg\" (UID: 
\"c84612ed-b267-4af5-990d-08890b3b81b1\") " pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.489664 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:00 crc kubenswrapper[4736]: I0316 16:46:00.974067 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561326-9vdsg"] Mar 16 16:46:01 crc kubenswrapper[4736]: I0316 16:46:01.318271 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" event={"ID":"c84612ed-b267-4af5-990d-08890b3b81b1","Type":"ContainerStarted","Data":"2472bd5af6a00969e07ba2d81eca0b6eef3c011dacf58bf07a60277e2f9a0f6c"} Mar 16 16:46:03 crc kubenswrapper[4736]: I0316 16:46:03.338258 4736 generic.go:334] "Generic (PLEG): container finished" podID="c84612ed-b267-4af5-990d-08890b3b81b1" containerID="2982591311f30f6d270e9fc129d2da73093d65feef2b577730be1a7472816de0" exitCode=0 Mar 16 16:46:03 crc kubenswrapper[4736]: I0316 16:46:03.338415 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" event={"ID":"c84612ed-b267-4af5-990d-08890b3b81b1","Type":"ContainerDied","Data":"2982591311f30f6d270e9fc129d2da73093d65feef2b577730be1a7472816de0"} Mar 16 16:46:04 crc kubenswrapper[4736]: I0316 16:46:04.775575 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:04 crc kubenswrapper[4736]: I0316 16:46:04.818150 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6qls\" (UniqueName: \"kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls\") pod \"c84612ed-b267-4af5-990d-08890b3b81b1\" (UID: \"c84612ed-b267-4af5-990d-08890b3b81b1\") " Mar 16 16:46:04 crc kubenswrapper[4736]: I0316 16:46:04.825620 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls" (OuterVolumeSpecName: "kube-api-access-p6qls") pod "c84612ed-b267-4af5-990d-08890b3b81b1" (UID: "c84612ed-b267-4af5-990d-08890b3b81b1"). InnerVolumeSpecName "kube-api-access-p6qls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:46:04 crc kubenswrapper[4736]: I0316 16:46:04.921241 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6qls\" (UniqueName: \"kubernetes.io/projected/c84612ed-b267-4af5-990d-08890b3b81b1-kube-api-access-p6qls\") on node \"crc\" DevicePath \"\"" Mar 16 16:46:05 crc kubenswrapper[4736]: I0316 16:46:05.360638 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" event={"ID":"c84612ed-b267-4af5-990d-08890b3b81b1","Type":"ContainerDied","Data":"2472bd5af6a00969e07ba2d81eca0b6eef3c011dacf58bf07a60277e2f9a0f6c"} Mar 16 16:46:05 crc kubenswrapper[4736]: I0316 16:46:05.360885 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2472bd5af6a00969e07ba2d81eca0b6eef3c011dacf58bf07a60277e2f9a0f6c" Mar 16 16:46:05 crc kubenswrapper[4736]: I0316 16:46:05.360986 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561326-9vdsg" Mar 16 16:46:05 crc kubenswrapper[4736]: I0316 16:46:05.905194 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561320-j7rfh"] Mar 16 16:46:05 crc kubenswrapper[4736]: I0316 16:46:05.916366 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561320-j7rfh"] Mar 16 16:46:06 crc kubenswrapper[4736]: I0316 16:46:06.987941 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7602ffc8-e73d-4c3c-8cf8-c02fa01bc036" path="/var/lib/kubelet/pods/7602ffc8-e73d-4c3c-8cf8-c02fa01bc036/volumes" Mar 16 16:46:08 crc kubenswrapper[4736]: I0316 16:46:08.508482 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:46:08 crc kubenswrapper[4736]: I0316 16:46:08.508549 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:46:08 crc kubenswrapper[4736]: I0316 16:46:08.508597 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:46:08 crc kubenswrapper[4736]: I0316 16:46:08.509481 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:46:08 crc kubenswrapper[4736]: I0316 16:46:08.509555 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1" gracePeriod=600 Mar 16 16:46:09 crc kubenswrapper[4736]: I0316 16:46:09.407442 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1" exitCode=0 Mar 16 16:46:09 crc kubenswrapper[4736]: I0316 16:46:09.407521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1"} Mar 16 16:46:09 crc kubenswrapper[4736]: I0316 16:46:09.409405 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6"} Mar 16 16:46:09 crc kubenswrapper[4736]: I0316 16:46:09.409432 4736 scope.go:117] "RemoveContainer" 
containerID="cc7706eeaa8355d7945ee57eedf1f67e3d0823a56f07ad722dfa39f8b39ebe6a" Mar 16 16:46:59 crc kubenswrapper[4736]: I0316 16:46:59.087249 4736 scope.go:117] "RemoveContainer" containerID="b6750833f2a85e01b27aa6e8bb2f958e3f050aadeb692935e5b62c81b72113a4" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.166420 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561328-wmzs4"] Mar 16 16:48:00 crc kubenswrapper[4736]: E0316 16:48:00.167536 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c84612ed-b267-4af5-990d-08890b3b81b1" containerName="oc" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.167553 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c84612ed-b267-4af5-990d-08890b3b81b1" containerName="oc" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.167828 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c84612ed-b267-4af5-990d-08890b3b81b1" containerName="oc" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.168648 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.171065 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.172965 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.182063 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561328-wmzs4"] Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.185628 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.289153 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhgjc\" (UniqueName: \"kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc\") pod \"auto-csr-approver-29561328-wmzs4\" (UID: \"a0c7861b-4a8b-4d87-8462-333874519dfe\") " pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.391424 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhgjc\" (UniqueName: \"kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc\") pod \"auto-csr-approver-29561328-wmzs4\" (UID: \"a0c7861b-4a8b-4d87-8462-333874519dfe\") " pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.425055 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhgjc\" (UniqueName: \"kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc\") pod \"auto-csr-approver-29561328-wmzs4\" (UID: \"a0c7861b-4a8b-4d87-8462-333874519dfe\") " pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:00 crc kubenswrapper[4736]: I0316 16:48:00.490075 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:01 crc kubenswrapper[4736]: I0316 16:48:01.083021 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561328-wmzs4"] Mar 16 16:48:01 crc kubenswrapper[4736]: W0316 16:48:01.097035 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0c7861b_4a8b_4d87_8462_333874519dfe.slice/crio-adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e WatchSource:0}: Error finding container adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e: Status 404 returned error can't find the container with id adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e Mar 16 16:48:01 crc kubenswrapper[4736]: I0316 16:48:01.464143 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" event={"ID":"a0c7861b-4a8b-4d87-8462-333874519dfe","Type":"ContainerStarted","Data":"adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e"} Mar 16 16:48:03 crc kubenswrapper[4736]: I0316 16:48:03.493639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" event={"ID":"a0c7861b-4a8b-4d87-8462-333874519dfe","Type":"ContainerStarted","Data":"10b422180b22eb75e71bb8a18f9ccebedc0b25cddec0d015c9085a7e3d31f481"} Mar 16 16:48:03 crc kubenswrapper[4736]: I0316 16:48:03.522793 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" podStartSLOduration=2.437643117 podStartE2EDuration="3.522766889s" podCreationTimestamp="2026-03-16 16:48:00 +0000 UTC" firstStartedPulling="2026-03-16 16:48:01.0995311 +0000 UTC m=+5682.826921387" lastFinishedPulling="2026-03-16 16:48:02.184654872 +0000 UTC m=+5683.912045159" observedRunningTime="2026-03-16 16:48:03.515667896 +0000 UTC m=+5685.243058223" watchObservedRunningTime="2026-03-16 16:48:03.522766889 +0000 UTC m=+5685.250157206" Mar 16 16:48:04 crc kubenswrapper[4736]: I0316 16:48:04.502456 4736 generic.go:334] "Generic (PLEG): container finished" podID="a0c7861b-4a8b-4d87-8462-333874519dfe" containerID="10b422180b22eb75e71bb8a18f9ccebedc0b25cddec0d015c9085a7e3d31f481" exitCode=0 Mar 16 16:48:04 crc kubenswrapper[4736]: I0316 16:48:04.502509 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" event={"ID":"a0c7861b-4a8b-4d87-8462-333874519dfe","Type":"ContainerDied","Data":"10b422180b22eb75e71bb8a18f9ccebedc0b25cddec0d015c9085a7e3d31f481"} Mar 16 16:48:05 crc kubenswrapper[4736]: I0316 16:48:05.981371 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.124173 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhgjc\" (UniqueName: \"kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc\") pod \"a0c7861b-4a8b-4d87-8462-333874519dfe\" (UID: \"a0c7861b-4a8b-4d87-8462-333874519dfe\") " Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.138352 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc" (OuterVolumeSpecName: "kube-api-access-qhgjc") pod "a0c7861b-4a8b-4d87-8462-333874519dfe" (UID: "a0c7861b-4a8b-4d87-8462-333874519dfe"). InnerVolumeSpecName "kube-api-access-qhgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.227546 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhgjc\" (UniqueName: \"kubernetes.io/projected/a0c7861b-4a8b-4d87-8462-333874519dfe-kube-api-access-qhgjc\") on node \"crc\" DevicePath \"\"" Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.571859 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" event={"ID":"a0c7861b-4a8b-4d87-8462-333874519dfe","Type":"ContainerDied","Data":"adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e"} Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.571925 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb5885ed4520d7dd46eaa46e9c6d65b41896b9f37c9a27fedd1934d42177a1e" Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.572020 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561328-wmzs4" Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.605900 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561322-n84r6"] Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.614679 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561322-n84r6"] Mar 16 16:48:06 crc kubenswrapper[4736]: I0316 16:48:06.991508 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a5e72b-2f76-4487-9a0c-1246ff23ec6c" path="/var/lib/kubelet/pods/b5a5e72b-2f76-4487-9a0c-1246ff23ec6c/volumes" Mar 16 16:48:08 crc kubenswrapper[4736]: I0316 16:48:08.507722 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:48:08 crc kubenswrapper[4736]: I0316 16:48:08.508045 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:48:38 crc kubenswrapper[4736]: I0316 16:48:38.507836 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:48:38 crc kubenswrapper[4736]: I0316 16:48:38.508406 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:48:59 crc kubenswrapper[4736]: I0316 16:48:59.216362 4736 scope.go:117] "RemoveContainer" containerID="7e3ddeed556644faf44436db56872aadd8b3b52f54795827168f33ab472990d1" Mar 16 16:49:08 crc kubenswrapper[4736]: I0316 16:49:08.507941 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:49:08 crc kubenswrapper[4736]: I0316 16:49:08.508535 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:49:08 crc kubenswrapper[4736]: I0316 16:49:08.508574 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:49:08 crc kubenswrapper[4736]: I0316 16:49:08.509285 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:49:08 crc kubenswrapper[4736]: I0316 16:49:08.509331 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" gracePeriod=600 Mar 16 16:49:08 crc kubenswrapper[4736]: E0316 16:49:08.642014 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:49:09 crc kubenswrapper[4736]: I0316 16:49:09.150317 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" exitCode=0 Mar 16 16:49:09 crc kubenswrapper[4736]: I0316 16:49:09.150372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6"} Mar 16 16:49:09 crc kubenswrapper[4736]: I0316 16:49:09.150464 4736 scope.go:117] "RemoveContainer" containerID="8dec035bba71e0f4bce091e334d7143c9eb4ac0b32023e4673c75a566a58a9f1" Mar 16 16:49:09 crc kubenswrapper[4736]: I0316 16:49:09.152170 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:49:09 crc kubenswrapper[4736]: E0316 16:49:09.153672 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:49:23 crc kubenswrapper[4736]: I0316 16:49:23.978306 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:49:23 crc kubenswrapper[4736]: E0316 16:49:23.979061 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.881366 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:49:27 crc kubenswrapper[4736]: E0316 16:49:27.882507 4736 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a0c7861b-4a8b-4d87-8462-333874519dfe" containerName="oc" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.882526 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c7861b-4a8b-4d87-8462-333874519dfe" containerName="oc" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.882734 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c7861b-4a8b-4d87-8462-333874519dfe" containerName="oc" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.885696 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.902750 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.925426 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.925487 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:27 crc kubenswrapper[4736]: I0316 16:49:27.925550 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfpc\" (UniqueName: \"kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.027498 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.027535 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.027827 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmfpc\" (UniqueName: \"kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.028215 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " 
pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.028413 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.048434 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmfpc\" (UniqueName: \"kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc\") pod \"redhat-operators-kd8nx\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.204654 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:28 crc kubenswrapper[4736]: I0316 16:49:28.870727 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:49:29 crc kubenswrapper[4736]: I0316 16:49:29.325877 4736 generic.go:334] "Generic (PLEG): container finished" podID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerID="9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc" exitCode=0 Mar 16 16:49:29 crc kubenswrapper[4736]: I0316 16:49:29.325925 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerDied","Data":"9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc"} Mar 16 16:49:29 crc kubenswrapper[4736]: I0316 16:49:29.326313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerStarted","Data":"6148427da0f227f1ff8b79b71febde7ef83aabdb342161bd691553722bd567ca"} Mar 16 16:49:29 crc kubenswrapper[4736]: I0316 16:49:29.328314 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:49:32 crc kubenswrapper[4736]: I0316 16:49:32.354743 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerStarted","Data":"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc"} Mar 16 16:49:37 crc kubenswrapper[4736]: I0316 16:49:37.396022 4736 generic.go:334] "Generic (PLEG): container finished" podID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerID="0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc" exitCode=0 Mar 16 16:49:37 crc kubenswrapper[4736]: I0316 16:49:37.396089 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerDied","Data":"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc"} Mar 16 16:49:37 crc kubenswrapper[4736]: I0316 16:49:37.978456 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:49:37 crc kubenswrapper[4736]: E0316 16:49:37.978856 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:49:38 crc kubenswrapper[4736]: I0316 16:49:38.409317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerStarted","Data":"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da"} Mar 16 16:49:38 crc kubenswrapper[4736]: I0316 16:49:38.437395 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kd8nx" podStartSLOduration=2.699860182 podStartE2EDuration="11.43736985s" podCreationTimestamp="2026-03-16 16:49:27 +0000 UTC" firstStartedPulling="2026-03-16 16:49:29.328007895 +0000 UTC m=+5771.055398182" lastFinishedPulling="2026-03-16 16:49:38.065517563 +0000 UTC m=+5779.792907850" observedRunningTime="2026-03-16 16:49:38.436017843 +0000 UTC m=+5780.163408130" watchObservedRunningTime="2026-03-16 16:49:38.43736985 +0000 UTC m=+5780.164760157" Mar 16 16:49:48 crc kubenswrapper[4736]: I0316 16:49:48.205802 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:48 crc kubenswrapper[4736]: I0316 16:49:48.207055 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:49:49 crc kubenswrapper[4736]: I0316 16:49:49.253694 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kd8nx" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:49:49 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:49:49 crc kubenswrapper[4736]: > Mar 16 16:49:52 crc kubenswrapper[4736]: I0316 16:49:52.978629 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:49:52 crc kubenswrapper[4736]: E0316 16:49:52.979200 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:49:59 crc kubenswrapper[4736]: I0316 16:49:59.261448 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kd8nx" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:49:59 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:49:59 crc kubenswrapper[4736]: > Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.150221 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561330-5d67b"] Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.152423 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.154849 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.155672 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.156190 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.160274 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561330-5d67b"] Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.346743 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jznjw\" (UniqueName: \"kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw\") pod \"auto-csr-approver-29561330-5d67b\" (UID: \"2b0593c4-d000-41d2-b38c-d2778d8aa9bf\") " pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.449012 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jznjw\" (UniqueName: \"kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw\") pod \"auto-csr-approver-29561330-5d67b\" (UID: \"2b0593c4-d000-41d2-b38c-d2778d8aa9bf\") " pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.480952 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jznjw\" (UniqueName: \"kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw\") pod \"auto-csr-approver-29561330-5d67b\" (UID: \"2b0593c4-d000-41d2-b38c-d2778d8aa9bf\") " pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:00 crc kubenswrapper[4736]: I0316 16:50:00.774574 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:01 crc kubenswrapper[4736]: W0316 16:50:01.769379 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0593c4_d000_41d2_b38c_d2778d8aa9bf.slice/crio-c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9 WatchSource:0}: Error finding container c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9: Status 404 returned error can't find the container with id c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9 Mar 16 16:50:01 crc kubenswrapper[4736]: I0316 16:50:01.786661 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561330-5d67b"] Mar 16 16:50:02 crc kubenswrapper[4736]: I0316 16:50:02.632454 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561330-5d67b" event={"ID":"2b0593c4-d000-41d2-b38c-d2778d8aa9bf","Type":"ContainerStarted","Data":"c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9"} Mar 16 16:50:04 crc kubenswrapper[4736]: I0316 16:50:04.669082 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561330-5d67b" event={"ID":"2b0593c4-d000-41d2-b38c-d2778d8aa9bf","Type":"ContainerStarted","Data":"0724de47fd8281840fcb9347dcb2cccf9cc7c0151206ec4761d7d5bde7b7ffbc"} Mar 16 16:50:04 crc kubenswrapper[4736]: I0316 16:50:04.693565 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561330-5d67b" podStartSLOduration=3.218823624 podStartE2EDuration="4.693545016s" podCreationTimestamp="2026-03-16 16:50:00 +0000 UTC" firstStartedPulling="2026-03-16 16:50:01.775988552 +0000 UTC m=+5803.503378839" lastFinishedPulling="2026-03-16 16:50:03.250709914 +0000 UTC m=+5804.978100231" observedRunningTime="2026-03-16 16:50:04.686987588 +0000 UTC m=+5806.414377885" watchObservedRunningTime="2026-03-16 16:50:04.693545016 +0000 UTC m=+5806.420935303" Mar 16 16:50:05 crc kubenswrapper[4736]: I0316 16:50:05.681835 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561330-5d67b" event={"ID":"2b0593c4-d000-41d2-b38c-d2778d8aa9bf","Type":"ContainerDied","Data":"0724de47fd8281840fcb9347dcb2cccf9cc7c0151206ec4761d7d5bde7b7ffbc"} Mar 16 16:50:05 crc kubenswrapper[4736]: I0316 16:50:05.681627 4736 generic.go:334] "Generic (PLEG): container finished" podID="2b0593c4-d000-41d2-b38c-d2778d8aa9bf" containerID="0724de47fd8281840fcb9347dcb2cccf9cc7c0151206ec4761d7d5bde7b7ffbc" exitCode=0 Mar 16 16:50:06 crc kubenswrapper[4736]: I0316 16:50:06.980754 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:50:06 crc kubenswrapper[4736]: E0316 16:50:06.981387 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.236392 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.373193 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jznjw\" (UniqueName: \"kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw\") pod \"2b0593c4-d000-41d2-b38c-d2778d8aa9bf\" (UID: \"2b0593c4-d000-41d2-b38c-d2778d8aa9bf\") " Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.385478 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw" (OuterVolumeSpecName: "kube-api-access-jznjw") pod "2b0593c4-d000-41d2-b38c-d2778d8aa9bf" (UID: "2b0593c4-d000-41d2-b38c-d2778d8aa9bf"). InnerVolumeSpecName "kube-api-access-jznjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.475866 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jznjw\" (UniqueName: \"kubernetes.io/projected/2b0593c4-d000-41d2-b38c-d2778d8aa9bf-kube-api-access-jznjw\") on node \"crc\" DevicePath \"\"" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.700355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561330-5d67b" event={"ID":"2b0593c4-d000-41d2-b38c-d2778d8aa9bf","Type":"ContainerDied","Data":"c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9"} Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.700401 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3bb88dbaa3da7510f4b842aea8c4edb84f971946d346cf02f17c5306f7e15c9" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.700423 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561330-5d67b" Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.792482 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561324-sc9gp"] Mar 16 16:50:07 crc kubenswrapper[4736]: I0316 16:50:07.825629 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561324-sc9gp"] Mar 16 16:50:08 crc kubenswrapper[4736]: I0316 16:50:08.990376 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9d8c74-f7a4-4136-9b76-45f838fc532e" path="/var/lib/kubelet/pods/3c9d8c74-f7a4-4136-9b76-45f838fc532e/volumes" Mar 16 16:50:09 crc kubenswrapper[4736]: I0316 16:50:09.251614 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kd8nx" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" probeResult="failure" output=< Mar 16 16:50:09 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:50:09 crc kubenswrapper[4736]: > Mar 16 16:50:18 crc kubenswrapper[4736]: I0316 16:50:18.259715 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:50:18 crc kubenswrapper[4736]: I0316 16:50:18.317516 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:50:18 crc kubenswrapper[4736]: I0316 16:50:18.496139 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:50:19 crc kubenswrapper[4736]: I0316 16:50:19.863716 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kd8nx" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" containerID="cri-o://240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da" gracePeriod=2 Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.657933 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.829007 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities\") pod \"b23b1297-3252-4159-87e1-3e7bb7e6699d\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.829157 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmfpc\" (UniqueName: \"kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc\") pod \"b23b1297-3252-4159-87e1-3e7bb7e6699d\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.829352 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content\") pod \"b23b1297-3252-4159-87e1-3e7bb7e6699d\" (UID: \"b23b1297-3252-4159-87e1-3e7bb7e6699d\") " Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.830474 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities" (OuterVolumeSpecName: "utilities") pod "b23b1297-3252-4159-87e1-3e7bb7e6699d" (UID: "b23b1297-3252-4159-87e1-3e7bb7e6699d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.840323 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc" (OuterVolumeSpecName: "kube-api-access-qmfpc") pod "b23b1297-3252-4159-87e1-3e7bb7e6699d" (UID: "b23b1297-3252-4159-87e1-3e7bb7e6699d"). InnerVolumeSpecName "kube-api-access-qmfpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.873065 4736 generic.go:334] "Generic (PLEG): container finished" podID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerID="240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da" exitCode=0 Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.873166 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kd8nx" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.873186 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerDied","Data":"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da"} Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.874411 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kd8nx" event={"ID":"b23b1297-3252-4159-87e1-3e7bb7e6699d","Type":"ContainerDied","Data":"6148427da0f227f1ff8b79b71febde7ef83aabdb342161bd691553722bd567ca"} Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.874432 4736 scope.go:117] "RemoveContainer" containerID="240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.906739 4736 scope.go:117] "RemoveContainer" containerID="0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.931071 4736 scope.go:117] "RemoveContainer" containerID="9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.932997 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.933040 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmfpc\" (UniqueName: \"kubernetes.io/projected/b23b1297-3252-4159-87e1-3e7bb7e6699d-kube-api-access-qmfpc\") on node \"crc\" DevicePath \"\"" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.974788 4736 scope.go:117] "RemoveContainer" containerID="240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da" Mar 16 16:50:20 crc kubenswrapper[4736]: E0316 16:50:20.978600 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da\": container with ID starting with 240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da not found: ID does not exist" containerID="240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.978654 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da"} err="failed to get container status \"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da\": rpc error: code = NotFound desc = could not find container \"240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da\": container with ID starting with 240333ec52320d60145bea849f6ff65f44ec23a576d4bc6ba98ef56c40d050da not found: ID does not exist" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.978699 4736 scope.go:117] "RemoveContainer" containerID="0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc" Mar 16 16:50:20 crc kubenswrapper[4736]: E0316 16:50:20.979031 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc\": container with ID starting with 
0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc not found: ID does not exist" containerID="0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.979214 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc"} err="failed to get container status \"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc\": rpc error: code = NotFound desc = could not find container \"0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc\": container with ID starting with 0e7f76d4dc6f255f33aeb4e4fa8a5057b422619a635b4091871388519de828dc not found: ID does not exist" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.979296 4736 scope.go:117] "RemoveContainer" containerID="9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc" Mar 16 16:50:20 crc kubenswrapper[4736]: E0316 16:50:20.979599 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc\": container with ID starting with 9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc not found: ID does not exist" containerID="9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.979638 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc"} err="failed to get container status \"9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc\": rpc error: code = NotFound desc = could not find container \"9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc\": container with ID starting with 9f5b7a3f2b4444fab7bde5b0ac9a5b0b23215c93c23ca8f1e0f56fe6a7b2cdbc not found: ID does not exist" Mar 16 16:50:20 crc kubenswrapper[4736]: I0316 16:50:20.982260 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b23b1297-3252-4159-87e1-3e7bb7e6699d" (UID: "b23b1297-3252-4159-87e1-3e7bb7e6699d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:50:21 crc kubenswrapper[4736]: I0316 16:50:21.034492 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b23b1297-3252-4159-87e1-3e7bb7e6699d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:50:21 crc kubenswrapper[4736]: I0316 16:50:21.203377 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:50:21 crc kubenswrapper[4736]: I0316 16:50:21.212372 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kd8nx"] Mar 16 16:50:21 crc kubenswrapper[4736]: I0316 16:50:21.978347 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:50:21 crc kubenswrapper[4736]: E0316 16:50:21.978972 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:50:22 crc kubenswrapper[4736]: I0316 16:50:22.990207 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" path="/var/lib/kubelet/pods/b23b1297-3252-4159-87e1-3e7bb7e6699d/volumes" Mar 16 16:50:32 crc kubenswrapper[4736]: I0316 16:50:32.978303 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:50:32 crc kubenswrapper[4736]: E0316 16:50:32.979145 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:50:46 crc kubenswrapper[4736]: I0316 16:50:46.977933 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:50:46 crc kubenswrapper[4736]: E0316 16:50:46.979029 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:50:59 crc kubenswrapper[4736]: I0316 16:50:59.382859 4736 scope.go:117] "RemoveContainer" containerID="3c8db7265a539301cae9106a8eb6ed235f403a70fdfb51f140e18e3cb9a3a2d0" Mar 16 16:51:01 crc kubenswrapper[4736]: I0316 16:51:01.977975 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:51:01 crc kubenswrapper[4736]: E0316 16:51:01.978821 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:51:16 crc kubenswrapper[4736]: I0316 16:51:16.980287 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:51:16 crc kubenswrapper[4736]: E0316 16:51:16.981312 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:51:29 crc kubenswrapper[4736]: I0316 16:51:29.978493 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:51:29 crc kubenswrapper[4736]: E0316 16:51:29.979509 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:51:40 crc kubenswrapper[4736]: I0316 16:51:40.978869 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:51:40 crc kubenswrapper[4736]: E0316 16:51:40.981635 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:51:52 crc kubenswrapper[4736]: I0316 16:51:52.978314 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:51:52 crc kubenswrapper[4736]: E0316 16:51:52.979629 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.150166 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561332-kx4g9"] Mar 16 16:52:00 crc kubenswrapper[4736]: E0316 16:52:00.152641 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="extract-utilities" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.152668 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="extract-utilities" Mar 16 16:52:00 crc kubenswrapper[4736]: 
E0316 16:52:00.152707 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="extract-content" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.152715 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="extract-content" Mar 16 16:52:00 crc kubenswrapper[4736]: E0316 16:52:00.152733 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0593c4-d000-41d2-b38c-d2778d8aa9bf" containerName="oc" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.152739 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0593c4-d000-41d2-b38c-d2778d8aa9bf" containerName="oc" Mar 16 16:52:00 crc kubenswrapper[4736]: E0316 16:52:00.152751 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.152757 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.152975 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0593c4-d000-41d2-b38c-d2778d8aa9bf" containerName="oc" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.153004 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23b1297-3252-4159-87e1-3e7bb7e6699d" containerName="registry-server" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.153679 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.156378 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.157317 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.157381 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.169074 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561332-kx4g9"] Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.179328 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9fh\" (UniqueName: \"kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh\") pod \"auto-csr-approver-29561332-kx4g9\" (UID: \"1ee75f66-a179-4458-91b5-832d04d1d392\") " pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.280957 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt9fh\" (UniqueName: \"kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh\") pod \"auto-csr-approver-29561332-kx4g9\" (UID: \"1ee75f66-a179-4458-91b5-832d04d1d392\") " pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.300570 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt9fh\" (UniqueName: \"kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh\") pod 
\"auto-csr-approver-29561332-kx4g9\" (UID: \"1ee75f66-a179-4458-91b5-832d04d1d392\") " pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:00 crc kubenswrapper[4736]: I0316 16:52:00.475730 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:01 crc kubenswrapper[4736]: I0316 16:52:01.001791 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561332-kx4g9"] Mar 16 16:52:01 crc kubenswrapper[4736]: I0316 16:52:01.842993 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" event={"ID":"1ee75f66-a179-4458-91b5-832d04d1d392","Type":"ContainerStarted","Data":"8249a5135054f9702fd1aada380d133ba9dd0a2d4f6d8f4b3dea4e681763ab43"} Mar 16 16:52:02 crc kubenswrapper[4736]: I0316 16:52:02.852677 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ee75f66-a179-4458-91b5-832d04d1d392" containerID="ce136ed60825e8496cf131856d9c8ff0855ed153aff0945ea9075932df9a749f" exitCode=0 Mar 16 16:52:02 crc kubenswrapper[4736]: I0316 16:52:02.852791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" event={"ID":"1ee75f66-a179-4458-91b5-832d04d1d392","Type":"ContainerDied","Data":"ce136ed60825e8496cf131856d9c8ff0855ed153aff0945ea9075932df9a749f"} Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.244843 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.377508 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt9fh\" (UniqueName: \"kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh\") pod \"1ee75f66-a179-4458-91b5-832d04d1d392\" (UID: \"1ee75f66-a179-4458-91b5-832d04d1d392\") " Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.382927 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh" (OuterVolumeSpecName: "kube-api-access-rt9fh") pod "1ee75f66-a179-4458-91b5-832d04d1d392" (UID: "1ee75f66-a179-4458-91b5-832d04d1d392"). InnerVolumeSpecName "kube-api-access-rt9fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.480368 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt9fh\" (UniqueName: \"kubernetes.io/projected/1ee75f66-a179-4458-91b5-832d04d1d392-kube-api-access-rt9fh\") on node \"crc\" DevicePath \"\"" Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.877003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" event={"ID":"1ee75f66-a179-4458-91b5-832d04d1d392","Type":"ContainerDied","Data":"8249a5135054f9702fd1aada380d133ba9dd0a2d4f6d8f4b3dea4e681763ab43"} Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.877492 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8249a5135054f9702fd1aada380d133ba9dd0a2d4f6d8f4b3dea4e681763ab43" Mar 16 16:52:04 crc kubenswrapper[4736]: I0316 16:52:04.877181 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561332-kx4g9" Mar 16 16:52:05 crc kubenswrapper[4736]: I0316 16:52:05.325952 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561326-9vdsg"] Mar 16 16:52:05 crc kubenswrapper[4736]: I0316 16:52:05.340130 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561326-9vdsg"] Mar 16 16:52:06 crc kubenswrapper[4736]: I0316 16:52:06.991056 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c84612ed-b267-4af5-990d-08890b3b81b1" path="/var/lib/kubelet/pods/c84612ed-b267-4af5-990d-08890b3b81b1/volumes" Mar 16 16:52:07 crc kubenswrapper[4736]: I0316 16:52:07.978023 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:52:07 crc kubenswrapper[4736]: E0316 16:52:07.978957 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:18 crc kubenswrapper[4736]: I0316 16:52:18.978316 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:52:18 crc kubenswrapper[4736]: E0316 16:52:18.980824 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:31 crc kubenswrapper[4736]: I0316 16:52:31.978611 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:52:31 crc kubenswrapper[4736]: E0316 16:52:31.979562 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:42 crc kubenswrapper[4736]: I0316 16:52:42.978307 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:52:42 crc kubenswrapper[4736]: E0316 16:52:42.979042 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:55 crc kubenswrapper[4736]: I0316 16:52:55.978664 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 
16:52:55 crc kubenswrapper[4736]: E0316 16:52:55.980386 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:52:59 crc kubenswrapper[4736]: I0316 16:52:59.517994 4736 scope.go:117] "RemoveContainer" containerID="2982591311f30f6d270e9fc129d2da73093d65feef2b577730be1a7472816de0" Mar 16 16:53:07 crc kubenswrapper[4736]: I0316 16:53:07.979532 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:53:07 crc kubenswrapper[4736]: E0316 16:53:07.980309 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:53:22 crc kubenswrapper[4736]: I0316 16:53:22.978683 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:53:22 crc kubenswrapper[4736]: E0316 16:53:22.979771 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.612146 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:53:35 crc kubenswrapper[4736]: E0316 16:53:35.614373 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee75f66-a179-4458-91b5-832d04d1d392" containerName="oc" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.614487 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee75f66-a179-4458-91b5-832d04d1d392" containerName="oc" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.614837 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee75f66-a179-4458-91b5-832d04d1d392" containerName="oc" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.618257 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.628764 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.777256 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.777322 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.777422 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5cpq\" (UniqueName: \"kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.878646 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5cpq\" (UniqueName: \"kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.878787 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.878815 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.879268 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.879277 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.903160 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t5cpq\" (UniqueName: \"kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq\") pod \"certified-operators-bzk4p\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:35 crc kubenswrapper[4736]: I0316 16:53:35.995165 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:36 crc kubenswrapper[4736]: I0316 16:53:36.978888 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:53:36 crc kubenswrapper[4736]: E0316 16:53:36.979550 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:53:37 crc kubenswrapper[4736]: I0316 16:53:37.094116 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:53:37 crc kubenswrapper[4736]: I0316 16:53:37.881282 4736 generic.go:334] "Generic (PLEG): container finished" podID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerID="8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501" exitCode=0 Mar 16 16:53:37 crc kubenswrapper[4736]: I0316 16:53:37.881358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerDied","Data":"8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501"} Mar 16 16:53:37 crc kubenswrapper[4736]: I0316 16:53:37.881829 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerStarted","Data":"6980f3a421fca9be1a20d3e4b3ddc1492a9cd955b2f627716a9e4a37172697ed"} Mar 16 16:53:38 crc kubenswrapper[4736]: I0316 16:53:38.891237 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerStarted","Data":"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb"} Mar 16 16:53:40 crc kubenswrapper[4736]: I0316 16:53:40.911551 4736 generic.go:334] "Generic (PLEG): container finished" podID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerID="5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb" exitCode=0 Mar 16 16:53:40 crc kubenswrapper[4736]: I0316 16:53:40.911634 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerDied","Data":"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb"} Mar 16 16:53:40 crc kubenswrapper[4736]: I0316 16:53:40.992817 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 16:53:40 crc kubenswrapper[4736]: I0316 16:53:40.995238 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.002510 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.075848 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.076135 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.076212 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zjh8\" (UniqueName: \"kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.177718 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.177787 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zjh8\" (UniqueName: \"kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.177845 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.178268 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.178467 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.238495 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5zjh8\" (UniqueName: \"kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8\") pod \"community-operators-wws42\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.320486 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.937426 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerStarted","Data":"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591"} Mar 16 16:53:41 crc kubenswrapper[4736]: I0316 16:53:41.961308 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bzk4p" podStartSLOduration=3.454192256 podStartE2EDuration="6.958904237s" podCreationTimestamp="2026-03-16 16:53:35 +0000 UTC" firstStartedPulling="2026-03-16 16:53:37.882975495 +0000 UTC m=+6019.610365782" lastFinishedPulling="2026-03-16 16:53:41.387687476 +0000 UTC m=+6023.115077763" observedRunningTime="2026-03-16 16:53:41.958250919 +0000 UTC m=+6023.685641206" watchObservedRunningTime="2026-03-16 16:53:41.958904237 +0000 UTC m=+6023.686294524" Mar 16 16:53:42 crc kubenswrapper[4736]: I0316 16:53:42.244236 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 16:53:42 crc kubenswrapper[4736]: W0316 16:53:42.247404 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod938fc316_fda9_4e19_8972_92b50fd432e4.slice/crio-79933b72254dbc7ca92f3697eb40601b6b6dc486b5076811a398aa6c88d848fb WatchSource:0}: Error finding container 79933b72254dbc7ca92f3697eb40601b6b6dc486b5076811a398aa6c88d848fb: Status 404 returned error can't find the container with id 79933b72254dbc7ca92f3697eb40601b6b6dc486b5076811a398aa6c88d848fb Mar 16 16:53:42 crc kubenswrapper[4736]: I0316 16:53:42.954478 4736 generic.go:334] "Generic (PLEG): container finished" podID="938fc316-fda9-4e19-8972-92b50fd432e4" containerID="189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726" exitCode=0 Mar 16 16:53:42 crc kubenswrapper[4736]: I0316 16:53:42.954687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerDied","Data":"189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726"} Mar 16 16:53:42 crc kubenswrapper[4736]: I0316 16:53:42.954798 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerStarted","Data":"79933b72254dbc7ca92f3697eb40601b6b6dc486b5076811a398aa6c88d848fb"} Mar 16 16:53:45 crc kubenswrapper[4736]: I0316 16:53:45.995258 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:45 crc kubenswrapper[4736]: I0316 16:53:45.996407 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:53:47 crc kubenswrapper[4736]: I0316 16:53:47.051647 4736 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/certified-operators-bzk4p" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" probeResult="failure" output=< Mar 16 16:53:47 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:53:47 crc kubenswrapper[4736]: > Mar 16 16:53:48 crc kubenswrapper[4736]: I0316 16:53:48.984238 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:53:48 crc kubenswrapper[4736]: E0316 16:53:48.985367 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:53:50 crc kubenswrapper[4736]: I0316 16:53:50.016622 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerStarted","Data":"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44"} Mar 16 16:53:52 crc kubenswrapper[4736]: I0316 16:53:52.059263 4736 generic.go:334] "Generic (PLEG): container finished" podID="938fc316-fda9-4e19-8972-92b50fd432e4" containerID="4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44" exitCode=0 Mar 16 16:53:52 crc kubenswrapper[4736]: I0316 16:53:52.059352 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerDied","Data":"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44"} Mar 16 16:53:53 crc kubenswrapper[4736]: I0316 16:53:53.069261 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerStarted","Data":"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb"} Mar 16 16:53:53 crc kubenswrapper[4736]: I0316 16:53:53.099283 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wws42" podStartSLOduration=3.412801881 podStartE2EDuration="13.099263698s" podCreationTimestamp="2026-03-16 16:53:40 +0000 UTC" firstStartedPulling="2026-03-16 16:53:42.958663111 +0000 UTC m=+6024.686053398" lastFinishedPulling="2026-03-16 16:53:52.645124928 +0000 UTC m=+6034.372515215" observedRunningTime="2026-03-16 16:53:53.095807574 +0000 UTC m=+6034.823197861" watchObservedRunningTime="2026-03-16 16:53:53.099263698 +0000 UTC m=+6034.826653995" Mar 16 16:53:57 crc kubenswrapper[4736]: I0316 16:53:57.042994 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bzk4p" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" probeResult="failure" output=< Mar 16 16:53:57 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:53:57 crc kubenswrapper[4736]: > Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.248344 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561334-g9kpg"] Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.251295 4736 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.260945 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.260956 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.260950 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.261855 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561334-g9kpg"] Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.387872 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhk5\" (UniqueName: \"kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5\") pod \"auto-csr-approver-29561334-g9kpg\" (UID: \"f6db902e-e71c-4740-8f5d-623c70017205\") " pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.490631 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrhk5\" (UniqueName: \"kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5\") pod \"auto-csr-approver-29561334-g9kpg\" (UID: \"f6db902e-e71c-4740-8f5d-623c70017205\") " pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.515965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrhk5\" (UniqueName: \"kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5\") pod \"auto-csr-approver-29561334-g9kpg\" (UID: \"f6db902e-e71c-4740-8f5d-623c70017205\") " pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:00 crc kubenswrapper[4736]: I0316 16:54:00.574679 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:01 crc kubenswrapper[4736]: I0316 16:54:01.321383 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:54:01 crc kubenswrapper[4736]: I0316 16:54:01.321737 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:54:01 crc kubenswrapper[4736]: I0316 16:54:01.488505 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561334-g9kpg"] Mar 16 16:54:01 crc kubenswrapper[4736]: W0316 16:54:01.498549 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6db902e_e71c_4740_8f5d_623c70017205.slice/crio-6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2 WatchSource:0}: Error finding container 6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2: Status 404 returned error can't find the container with id 6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2 Mar 16 16:54:02 crc kubenswrapper[4736]: I0316 16:54:02.149373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" event={"ID":"f6db902e-e71c-4740-8f5d-623c70017205","Type":"ContainerStarted","Data":"6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2"} Mar 16 16:54:02 crc kubenswrapper[4736]: I0316 16:54:02.369448 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wws42" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="registry-server" probeResult="failure" output=< Mar 16 16:54:02 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:54:02 crc kubenswrapper[4736]: > Mar 16 16:54:02 crc kubenswrapper[4736]: I0316 16:54:02.978804 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:54:02 crc kubenswrapper[4736]: E0316 16:54:02.979236 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 16:54:04 crc kubenswrapper[4736]: I0316 16:54:04.168401 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" event={"ID":"f6db902e-e71c-4740-8f5d-623c70017205","Type":"ContainerStarted","Data":"f47672a647ae58db1e8b3eeb127d92f433e71bc400d3022a01ba423add1472b9"} Mar 16 16:54:04 crc kubenswrapper[4736]: I0316 16:54:04.183028 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" podStartSLOduration=2.976003603 podStartE2EDuration="4.182985246s" podCreationTimestamp="2026-03-16 16:54:00 +0000 UTC" firstStartedPulling="2026-03-16 16:54:01.503511242 +0000 UTC m=+6043.230901529" lastFinishedPulling="2026-03-16 16:54:02.710492885 +0000 UTC m=+6044.437883172" observedRunningTime="2026-03-16 16:54:04.181345231 +0000 UTC m=+6045.908735528" watchObservedRunningTime="2026-03-16 16:54:04.182985246 +0000 UTC m=+6045.910375553" Mar 
16 16:54:05 crc kubenswrapper[4736]: I0316 16:54:05.182993 4736 generic.go:334] "Generic (PLEG): container finished" podID="f6db902e-e71c-4740-8f5d-623c70017205" containerID="f47672a647ae58db1e8b3eeb127d92f433e71bc400d3022a01ba423add1472b9" exitCode=0 Mar 16 16:54:05 crc kubenswrapper[4736]: I0316 16:54:05.183320 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" event={"ID":"f6db902e-e71c-4740-8f5d-623c70017205","Type":"ContainerDied","Data":"f47672a647ae58db1e8b3eeb127d92f433e71bc400d3022a01ba423add1472b9"} Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.055967 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.111711 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.683705 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.816312 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrhk5\" (UniqueName: \"kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5\") pod \"f6db902e-e71c-4740-8f5d-623c70017205\" (UID: \"f6db902e-e71c-4740-8f5d-623c70017205\") " Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.817369 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.837255 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5" (OuterVolumeSpecName: "kube-api-access-rrhk5") pod "f6db902e-e71c-4740-8f5d-623c70017205" (UID: "f6db902e-e71c-4740-8f5d-623c70017205"). InnerVolumeSpecName "kube-api-access-rrhk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:54:06 crc kubenswrapper[4736]: I0316 16:54:06.919214 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrhk5\" (UniqueName: \"kubernetes.io/projected/f6db902e-e71c-4740-8f5d-623c70017205-kube-api-access-rrhk5\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.200760 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" event={"ID":"f6db902e-e71c-4740-8f5d-623c70017205","Type":"ContainerDied","Data":"6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2"} Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.200827 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bdef190e7aeaeeb1292def03cde4909de9babbb71cf029adb710d7191257af2" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.200774 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561334-g9kpg" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.204351 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bzk4p" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" containerID="cri-o://fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591" gracePeriod=2 Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.344246 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561328-wmzs4"] Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.358523 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561328-wmzs4"] Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.772248 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.832572 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5cpq\" (UniqueName: \"kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq\") pod \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.832639 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content\") pod \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.832904 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities\") pod \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\" (UID: \"b2e5b7d0-6ed8-44e4-bab5-9884e870d673\") " Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.843546 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq" (OuterVolumeSpecName: "kube-api-access-t5cpq") pod "b2e5b7d0-6ed8-44e4-bab5-9884e870d673" (UID: "b2e5b7d0-6ed8-44e4-bab5-9884e870d673"). InnerVolumeSpecName "kube-api-access-t5cpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.847136 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities" (OuterVolumeSpecName: "utilities") pod "b2e5b7d0-6ed8-44e4-bab5-9884e870d673" (UID: "b2e5b7d0-6ed8-44e4-bab5-9884e870d673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.931512 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2e5b7d0-6ed8-44e4-bab5-9884e870d673" (UID: "b2e5b7d0-6ed8-44e4-bab5-9884e870d673"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.934898 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.935135 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5cpq\" (UniqueName: \"kubernetes.io/projected/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-kube-api-access-t5cpq\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:07 crc kubenswrapper[4736]: I0316 16:54:07.935240 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2e5b7d0-6ed8-44e4-bab5-9884e870d673-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.225744 4736 generic.go:334] "Generic (PLEG): container finished" podID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerID="fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591" exitCode=0 Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.225794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerDied","Data":"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591"} Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.225832 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzk4p" event={"ID":"b2e5b7d0-6ed8-44e4-bab5-9884e870d673","Type":"ContainerDied","Data":"6980f3a421fca9be1a20d3e4b3ddc1492a9cd955b2f627716a9e4a37172697ed"} Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.225854 4736 scope.go:117] "RemoveContainer" containerID="fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.226039 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzk4p" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.256143 4736 scope.go:117] "RemoveContainer" containerID="5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.293205 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.301402 4736 scope.go:117] "RemoveContainer" containerID="8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.305561 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bzk4p"] Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.338658 4736 scope.go:117] "RemoveContainer" containerID="fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591" Mar 16 16:54:08 crc kubenswrapper[4736]: E0316 16:54:08.342771 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591\": container with ID starting with fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591 not found: ID does not exist" containerID="fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.342850 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591"} err="failed to get container status \"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591\": rpc error: code = NotFound desc = could not find container \"fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591\": container with ID starting with fdc414d24cb4927bdf61a4a4c117874abc76e41a1b752a5c657375b37597e591 not found: ID does not exist" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.342886 4736 scope.go:117] "RemoveContainer" containerID="5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb" Mar 16 16:54:08 crc kubenswrapper[4736]: E0316 16:54:08.343453 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb\": container with ID starting with 5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb not found: ID does not exist" containerID="5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.343485 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb"} err="failed to get container status \"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb\": rpc error: code = NotFound desc = could not find container \"5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb\": container with ID starting with 5e13cbd2b8a31c5d865d816c3748a7498fd9bb00c4c14370332db5acef46f1fb not found: ID does not exist" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.343503 4736 scope.go:117] "RemoveContainer" containerID="8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501" Mar 16 16:54:08 crc kubenswrapper[4736]: E0316 16:54:08.344001 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501\": container with ID starting with 8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501 not found: ID does not exist" containerID="8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501" Mar 16 16:54:08 crc kubenswrapper[4736]: I0316 16:54:08.344040 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501"} err="failed to get container status \"8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501\": rpc error: code = NotFound desc = could not find container \"8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501\": container with ID starting with 8b9916ddb7c7e4f8c84ffe0ed0a099409fa11c565d5b8e75777dec3c0daf6501 not found: ID does not exist" Mar 16 16:54:09 crc kubenswrapper[4736]: I0316 16:54:08.999768 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c7861b-4a8b-4d87-8462-333874519dfe" path="/var/lib/kubelet/pods/a0c7861b-4a8b-4d87-8462-333874519dfe/volumes" Mar 16 16:54:09 crc kubenswrapper[4736]: I0316 16:54:09.002229 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" path="/var/lib/kubelet/pods/b2e5b7d0-6ed8-44e4-bab5-9884e870d673/volumes" Mar 16 16:54:11 crc kubenswrapper[4736]: I0316 16:54:11.374944 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:54:11 crc kubenswrapper[4736]: I0316 16:54:11.426783 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wws42" Mar 16 16:54:11 crc kubenswrapper[4736]: I0316 16:54:11.857861 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.019695 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.020002 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftntd" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" containerID="cri-o://0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" gracePeriod=2 Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.260727 4736 generic.go:334] "Generic (PLEG): container finished" podID="44843712-11b8-4d11-b61f-00678e344b30" containerID="0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" exitCode=0 Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.260821 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerDied","Data":"0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd"} Mar 16 16:54:12 crc kubenswrapper[4736]: E0316 16:54:12.571892 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd is running failed: container process not found" containerID="0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" 
cmd=["grpc_health_probe","-addr=:50051"] Mar 16 16:54:12 crc kubenswrapper[4736]: E0316 16:54:12.572131 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd is running failed: container process not found" containerID="0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 16 16:54:12 crc kubenswrapper[4736]: E0316 16:54:12.572311 4736 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd is running failed: container process not found" containerID="0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" cmd=["grpc_health_probe","-addr=:50051"] Mar 16 16:54:12 crc kubenswrapper[4736]: E0316 16:54:12.572336 4736 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-ftntd" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.641267 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.831951 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content\") pod \"44843712-11b8-4d11-b61f-00678e344b30\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.832036 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities\") pod \"44843712-11b8-4d11-b61f-00678e344b30\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.832122 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhwcl\" (UniqueName: \"kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl\") pod \"44843712-11b8-4d11-b61f-00678e344b30\" (UID: \"44843712-11b8-4d11-b61f-00678e344b30\") " Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.834944 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities" (OuterVolumeSpecName: "utilities") pod "44843712-11b8-4d11-b61f-00678e344b30" (UID: "44843712-11b8-4d11-b61f-00678e344b30"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.841603 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl" (OuterVolumeSpecName: "kube-api-access-jhwcl") pod "44843712-11b8-4d11-b61f-00678e344b30" (UID: "44843712-11b8-4d11-b61f-00678e344b30"). InnerVolumeSpecName "kube-api-access-jhwcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.936501 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.936529 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhwcl\" (UniqueName: \"kubernetes.io/projected/44843712-11b8-4d11-b61f-00678e344b30-kube-api-access-jhwcl\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:12 crc kubenswrapper[4736]: I0316 16:54:12.938784 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44843712-11b8-4d11-b61f-00678e344b30" (UID: "44843712-11b8-4d11-b61f-00678e344b30"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.039172 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44843712-11b8-4d11-b61f-00678e344b30-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.271851 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftntd" Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.272388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftntd" event={"ID":"44843712-11b8-4d11-b61f-00678e344b30","Type":"ContainerDied","Data":"910e1f03278f9061a63140aa05a1f593cfa59e6850a983b60c95bbcb291d0c06"} Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.272425 4736 scope.go:117] "RemoveContainer" containerID="0ecbeefd8e3b38c484a44b435c8864a5517539648f94142ed9ca578b578964bd" Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.292877 4736 scope.go:117] "RemoveContainer" containerID="5b85387327c831f5148d91bf2cc9c35a0d71c330c802dadf0c094e25d194fca5" Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.300158 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.310803 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftntd"] Mar 16 16:54:13 crc kubenswrapper[4736]: I0316 16:54:13.338818 4736 scope.go:117] "RemoveContainer" containerID="299c7fdac0a2219f5165184ee36697bb1011f1681bb211d7b8334099c2db26e2" Mar 16 16:54:14 crc kubenswrapper[4736]: I0316 16:54:14.991248 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44843712-11b8-4d11-b61f-00678e344b30" path="/var/lib/kubelet/pods/44843712-11b8-4d11-b61f-00678e344b30/volumes" Mar 16 16:54:15 crc kubenswrapper[4736]: I0316 16:54:15.978804 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:54:16 crc kubenswrapper[4736]: I0316 16:54:16.301048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52"} Mar 16 16:54:59 crc kubenswrapper[4736]: I0316 16:54:59.681549 4736 
scope.go:117] "RemoveContainer" containerID="10b422180b22eb75e71bb8a18f9ccebedc0b25cddec0d015c9085a7e3d31f481" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.639579 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.643406 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="extract-content" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.643623 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="extract-content" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.643763 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="extract-utilities" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.643853 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="extract-utilities" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.643964 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.644059 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.644160 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6db902e-e71c-4740-8f5d-623c70017205" containerName="oc" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.644269 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6db902e-e71c-4740-8f5d-623c70017205" containerName="oc" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.644364 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="extract-content" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.644473 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="extract-content" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.644577 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.644680 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: E0316 16:55:00.645386 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="extract-utilities" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.645491 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="extract-utilities" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.648202 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6db902e-e71c-4740-8f5d-623c70017205" containerName="oc" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.648402 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e5b7d0-6ed8-44e4-bab5-9884e870d673" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.648525 4736 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="44843712-11b8-4d11-b61f-00678e344b30" containerName="registry-server" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.651016 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.670938 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.820979 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ln4l\" (UniqueName: \"kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.821662 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.821751 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.923387 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ln4l\" (UniqueName: \"kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.923690 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.923836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.924257 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.924445 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities\") pod \"redhat-marketplace-zm4db\" (UID: 
\"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:00 crc kubenswrapper[4736]: I0316 16:55:00.942879 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ln4l\" (UniqueName: \"kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l\") pod \"redhat-marketplace-zm4db\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.024418 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.508064 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.772508 4736 generic.go:334] "Generic (PLEG): container finished" podID="5ff3455b-f483-41db-a598-803d7440ddd6" containerID="9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f" exitCode=0 Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.772549 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerDied","Data":"9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f"} Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.772577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerStarted","Data":"750ffdf6ddd3f9e1aa3772ebc0c4ac545dcfa6cce52fa2c67528ddb52b125b07"} Mar 16 16:55:01 crc kubenswrapper[4736]: I0316 16:55:01.774848 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 16:55:02 crc kubenswrapper[4736]: I0316 16:55:02.781839 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerStarted","Data":"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0"} Mar 16 16:55:03 crc kubenswrapper[4736]: I0316 16:55:03.804669 4736 generic.go:334] "Generic (PLEG): container finished" podID="5ff3455b-f483-41db-a598-803d7440ddd6" containerID="4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0" exitCode=0 Mar 16 16:55:03 crc kubenswrapper[4736]: I0316 16:55:03.805581 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerDied","Data":"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0"} Mar 16 16:55:04 crc kubenswrapper[4736]: I0316 16:55:04.823511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerStarted","Data":"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f"} Mar 16 16:55:04 crc kubenswrapper[4736]: I0316 16:55:04.848521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zm4db" podStartSLOduration=2.382066246 podStartE2EDuration="4.848505361s" podCreationTimestamp="2026-03-16 16:55:00 +0000 UTC" firstStartedPulling="2026-03-16 16:55:01.773917968 +0000 UTC m=+6103.501308255" 
lastFinishedPulling="2026-03-16 16:55:04.240357043 +0000 UTC m=+6105.967747370" observedRunningTime="2026-03-16 16:55:04.840989877 +0000 UTC m=+6106.568380164" watchObservedRunningTime="2026-03-16 16:55:04.848505361 +0000 UTC m=+6106.575895648" Mar 16 16:55:11 crc kubenswrapper[4736]: I0316 16:55:11.024852 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:11 crc kubenswrapper[4736]: I0316 16:55:11.025562 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:12 crc kubenswrapper[4736]: I0316 16:55:12.074734 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zm4db" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="registry-server" probeResult="failure" output=< Mar 16 16:55:12 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 16:55:12 crc kubenswrapper[4736]: > Mar 16 16:55:21 crc kubenswrapper[4736]: I0316 16:55:21.076267 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:21 crc kubenswrapper[4736]: I0316 16:55:21.128504 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:21 crc kubenswrapper[4736]: I0316 16:55:21.325149 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:22 crc kubenswrapper[4736]: I0316 16:55:22.980761 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zm4db" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="registry-server" containerID="cri-o://b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f" gracePeriod=2 Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.901315 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.999077 4736 generic.go:334] "Generic (PLEG): container finished" podID="5ff3455b-f483-41db-a598-803d7440ddd6" containerID="b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f" exitCode=0 Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.999132 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerDied","Data":"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f"} Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.999157 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zm4db" event={"ID":"5ff3455b-f483-41db-a598-803d7440ddd6","Type":"ContainerDied","Data":"750ffdf6ddd3f9e1aa3772ebc0c4ac545dcfa6cce52fa2c67528ddb52b125b07"} Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.999174 4736 scope.go:117] "RemoveContainer" containerID="b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f" Mar 16 16:55:23 crc kubenswrapper[4736]: I0316 16:55:23.999291 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zm4db" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.015538 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content\") pod \"5ff3455b-f483-41db-a598-803d7440ddd6\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.015737 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities\") pod \"5ff3455b-f483-41db-a598-803d7440ddd6\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.015951 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ln4l\" (UniqueName: \"kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l\") pod \"5ff3455b-f483-41db-a598-803d7440ddd6\" (UID: \"5ff3455b-f483-41db-a598-803d7440ddd6\") " Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.017887 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities" (OuterVolumeSpecName: "utilities") pod "5ff3455b-f483-41db-a598-803d7440ddd6" (UID: "5ff3455b-f483-41db-a598-803d7440ddd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.028087 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l" (OuterVolumeSpecName: "kube-api-access-2ln4l") pod "5ff3455b-f483-41db-a598-803d7440ddd6" (UID: "5ff3455b-f483-41db-a598-803d7440ddd6"). InnerVolumeSpecName "kube-api-access-2ln4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.046769 4736 scope.go:117] "RemoveContainer" containerID="4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.052508 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ff3455b-f483-41db-a598-803d7440ddd6" (UID: "5ff3455b-f483-41db-a598-803d7440ddd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.084952 4736 scope.go:117] "RemoveContainer" containerID="9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.119351 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.119377 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ff3455b-f483-41db-a598-803d7440ddd6-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.119403 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ln4l\" (UniqueName: \"kubernetes.io/projected/5ff3455b-f483-41db-a598-803d7440ddd6-kube-api-access-2ln4l\") on node \"crc\" DevicePath \"\"" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.129526 4736 scope.go:117] "RemoveContainer" containerID="b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f" Mar 16 16:55:24 crc kubenswrapper[4736]: E0316 16:55:24.133156 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f\": container with ID starting with b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f not found: ID does not exist" containerID="b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.133223 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f"} err="failed to get container status \"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f\": rpc error: code = NotFound desc = could not find container \"b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f\": container with ID starting with b69287f8a077fea3467f812815537c8bb013cc8dacfcf5e0a61203a85c77299f not found: ID does not exist" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.133258 4736 scope.go:117] "RemoveContainer" containerID="4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0" Mar 16 16:55:24 crc kubenswrapper[4736]: E0316 16:55:24.133728 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0\": container with ID starting with 4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0 not found: ID does not exist" containerID="4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.133858 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0"} err="failed to get container status \"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0\": rpc error: code = NotFound desc = could not find container \"4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0\": container with ID starting with 4b4c620610df32ec6f0366914a228efb5491bbc452273b9385add28aa6c594d0 not found: ID does not exist" Mar 16 16:55:24 crc 
kubenswrapper[4736]: I0316 16:55:24.133896 4736 scope.go:117] "RemoveContainer" containerID="9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f" Mar 16 16:55:24 crc kubenswrapper[4736]: E0316 16:55:24.134392 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f\": container with ID starting with 9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f not found: ID does not exist" containerID="9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.134452 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f"} err="failed to get container status \"9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f\": rpc error: code = NotFound desc = could not find container \"9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f\": container with ID starting with 9012315162e55566bb6b8d8d895ec703fdcba168f8edb9bfad4670aa35b9ee7f not found: ID does not exist" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.351228 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.372554 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zm4db"] Mar 16 16:55:24 crc kubenswrapper[4736]: E0316 16:55:24.402755 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff3455b_f483_41db_a598_803d7440ddd6.slice/crio-750ffdf6ddd3f9e1aa3772ebc0c4ac545dcfa6cce52fa2c67528ddb52b125b07\": RecentStats: unable to find data in memory cache]" Mar 16 16:55:24 crc kubenswrapper[4736]: I0316 16:55:24.990081 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" path="/var/lib/kubelet/pods/5ff3455b-f483-41db-a598-803d7440ddd6/volumes" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.254505 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561336-n9gvg"] Mar 16 16:56:00 crc kubenswrapper[4736]: E0316 16:56:00.256610 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="registry-server" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.256639 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="registry-server" Mar 16 16:56:00 crc kubenswrapper[4736]: E0316 16:56:00.256679 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="extract-utilities" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.256687 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="extract-utilities" Mar 16 16:56:00 crc kubenswrapper[4736]: E0316 16:56:00.256704 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="extract-content" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.256712 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" 
containerName="extract-content" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.257000 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff3455b-f483-41db-a598-803d7440ddd6" containerName="registry-server" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.257914 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.274098 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.274097 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.274146 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.325334 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561336-n9gvg"] Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.393805 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmpw6\" (UniqueName: \"kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6\") pod \"auto-csr-approver-29561336-n9gvg\" (UID: \"2f0bd710-8de2-47b1-915a-7592bb25311c\") " pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.495791 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmpw6\" (UniqueName: \"kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6\") pod \"auto-csr-approver-29561336-n9gvg\" (UID: \"2f0bd710-8de2-47b1-915a-7592bb25311c\") " pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.527208 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmpw6\" (UniqueName: \"kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6\") pod \"auto-csr-approver-29561336-n9gvg\" (UID: \"2f0bd710-8de2-47b1-915a-7592bb25311c\") " pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:00 crc kubenswrapper[4736]: I0316 16:56:00.586632 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:01 crc kubenswrapper[4736]: I0316 16:56:01.248714 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561336-n9gvg"] Mar 16 16:56:01 crc kubenswrapper[4736]: I0316 16:56:01.357648 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" event={"ID":"2f0bd710-8de2-47b1-915a-7592bb25311c","Type":"ContainerStarted","Data":"108a934b97e016097b7ffcd2455fd8525c5f4643fcb79d7c146ead0598bfe32f"} Mar 16 16:56:03 crc kubenswrapper[4736]: I0316 16:56:03.378083 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" event={"ID":"2f0bd710-8de2-47b1-915a-7592bb25311c","Type":"ContainerStarted","Data":"0d8f5c1b8839167b068c3c626559d159a42a339ea808791a952895b5ef7bf56e"} Mar 16 16:56:03 crc kubenswrapper[4736]: I0316 16:56:03.409548 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" podStartSLOduration=2.533667695 podStartE2EDuration="3.409521831s" podCreationTimestamp="2026-03-16 16:56:00 +0000 UTC" firstStartedPulling="2026-03-16 16:56:01.266225354 +0000 UTC m=+6162.993615641" lastFinishedPulling="2026-03-16 16:56:02.1420795 +0000 UTC m=+6163.869469777" observedRunningTime="2026-03-16 16:56:03.401096382 +0000 UTC m=+6165.128486689" watchObservedRunningTime="2026-03-16 16:56:03.409521831 +0000 UTC m=+6165.136912138" Mar 16 16:56:04 crc kubenswrapper[4736]: I0316 16:56:04.390775 4736 generic.go:334] "Generic (PLEG): container finished" podID="2f0bd710-8de2-47b1-915a-7592bb25311c" containerID="0d8f5c1b8839167b068c3c626559d159a42a339ea808791a952895b5ef7bf56e" exitCode=0 Mar 16 16:56:04 crc kubenswrapper[4736]: I0316 16:56:04.390915 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" event={"ID":"2f0bd710-8de2-47b1-915a-7592bb25311c","Type":"ContainerDied","Data":"0d8f5c1b8839167b068c3c626559d159a42a339ea808791a952895b5ef7bf56e"} Mar 16 16:56:05 crc kubenswrapper[4736]: I0316 16:56:05.831763 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:05 crc kubenswrapper[4736]: I0316 16:56:05.994170 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmpw6\" (UniqueName: \"kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6\") pod \"2f0bd710-8de2-47b1-915a-7592bb25311c\" (UID: \"2f0bd710-8de2-47b1-915a-7592bb25311c\") " Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.002494 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6" (OuterVolumeSpecName: "kube-api-access-wmpw6") pod "2f0bd710-8de2-47b1-915a-7592bb25311c" (UID: "2f0bd710-8de2-47b1-915a-7592bb25311c"). InnerVolumeSpecName "kube-api-access-wmpw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.097257 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmpw6\" (UniqueName: \"kubernetes.io/projected/2f0bd710-8de2-47b1-915a-7592bb25311c-kube-api-access-wmpw6\") on node \"crc\" DevicePath \"\"" Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.407094 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" event={"ID":"2f0bd710-8de2-47b1-915a-7592bb25311c","Type":"ContainerDied","Data":"108a934b97e016097b7ffcd2455fd8525c5f4643fcb79d7c146ead0598bfe32f"} Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.407147 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108a934b97e016097b7ffcd2455fd8525c5f4643fcb79d7c146ead0598bfe32f" Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.407195 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561336-n9gvg" Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.931258 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561330-5d67b"] Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.940603 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561330-5d67b"] Mar 16 16:56:06 crc kubenswrapper[4736]: I0316 16:56:06.989397 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0593c4-d000-41d2-b38c-d2778d8aa9bf" path="/var/lib/kubelet/pods/2b0593c4-d000-41d2-b38c-d2778d8aa9bf/volumes" Mar 16 16:56:38 crc kubenswrapper[4736]: I0316 16:56:38.508153 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:56:38 crc kubenswrapper[4736]: I0316 16:56:38.508700 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:56:59 crc kubenswrapper[4736]: I0316 16:56:59.912640 4736 scope.go:117] "RemoveContainer" containerID="0724de47fd8281840fcb9347dcb2cccf9cc7c0151206ec4761d7d5bde7b7ffbc" Mar 16 16:57:08 crc kubenswrapper[4736]: I0316 16:57:08.508294 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:57:08 crc kubenswrapper[4736]: I0316 16:57:08.509161 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:57:38 crc kubenswrapper[4736]: I0316 16:57:38.508668 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:57:38 crc kubenswrapper[4736]: I0316 16:57:38.510809 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:57:38 crc kubenswrapper[4736]: I0316 16:57:38.511018 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 16:57:38 crc kubenswrapper[4736]: I0316 16:57:38.512195 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 16:57:38 crc kubenswrapper[4736]: I0316 16:57:38.512448 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52" gracePeriod=600 Mar 16 16:57:39 crc kubenswrapper[4736]: I0316 16:57:39.335168 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52" exitCode=0 Mar 16 16:57:39 crc kubenswrapper[4736]: I0316 16:57:39.335353 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52"} Mar 16 16:57:39 crc kubenswrapper[4736]: I0316 16:57:39.335572 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480"} Mar 16 16:57:39 crc kubenswrapper[4736]: I0316 16:57:39.335606 4736 scope.go:117] "RemoveContainer" containerID="e7c1468791a539172812b5a72cd765772d6803a03d448d3158d3c9ba94260ae6" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.168953 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561338-zn5d4"] Mar 16 16:58:00 crc kubenswrapper[4736]: E0316 16:58:00.171501 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0bd710-8de2-47b1-915a-7592bb25311c" containerName="oc" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.171651 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0bd710-8de2-47b1-915a-7592bb25311c" containerName="oc" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.171971 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0bd710-8de2-47b1-915a-7592bb25311c" containerName="oc" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.174180 4736 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.176535 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.176675 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.176625 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.178386 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561338-zn5d4"] Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.350470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhmg\" (UniqueName: \"kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg\") pod \"auto-csr-approver-29561338-zn5d4\" (UID: \"d0d91b75-e3b3-4622-9acd-7d160728454c\") " pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.452557 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhmg\" (UniqueName: \"kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg\") pod \"auto-csr-approver-29561338-zn5d4\" (UID: \"d0d91b75-e3b3-4622-9acd-7d160728454c\") " pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.476277 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhmg\" (UniqueName: \"kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg\") pod \"auto-csr-approver-29561338-zn5d4\" (UID: \"d0d91b75-e3b3-4622-9acd-7d160728454c\") " pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:00 crc kubenswrapper[4736]: I0316 16:58:00.492063 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:01 crc kubenswrapper[4736]: I0316 16:58:01.062807 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561338-zn5d4"] Mar 16 16:58:01 crc kubenswrapper[4736]: W0316 16:58:01.080430 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0d91b75_e3b3_4622_9acd_7d160728454c.slice/crio-c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057 WatchSource:0}: Error finding container c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057: Status 404 returned error can't find the container with id c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057 Mar 16 16:58:01 crc kubenswrapper[4736]: I0316 16:58:01.578409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" event={"ID":"d0d91b75-e3b3-4622-9acd-7d160728454c","Type":"ContainerStarted","Data":"c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057"} Mar 16 16:58:02 crc kubenswrapper[4736]: I0316 16:58:02.588834 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" event={"ID":"d0d91b75-e3b3-4622-9acd-7d160728454c","Type":"ContainerStarted","Data":"0852a199e0a4eadd238709c50a64694592ece722171d3005ad43ae475fec1e29"} Mar 16 16:58:02 crc kubenswrapper[4736]: I0316 16:58:02.604453 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" podStartSLOduration=1.564704679 podStartE2EDuration="2.604437113s" podCreationTimestamp="2026-03-16 16:58:00 +0000 UTC" firstStartedPulling="2026-03-16 16:58:01.080962742 +0000 UTC m=+6282.808353059" lastFinishedPulling="2026-03-16 16:58:02.120695186 +0000 UTC m=+6283.848085493" observedRunningTime="2026-03-16 16:58:02.600880936 +0000 UTC m=+6284.328271223" watchObservedRunningTime="2026-03-16 16:58:02.604437113 +0000 UTC m=+6284.331827410" Mar 16 16:58:03 crc kubenswrapper[4736]: I0316 16:58:03.601327 4736 generic.go:334] "Generic (PLEG): container finished" podID="d0d91b75-e3b3-4622-9acd-7d160728454c" containerID="0852a199e0a4eadd238709c50a64694592ece722171d3005ad43ae475fec1e29" exitCode=0 Mar 16 16:58:03 crc kubenswrapper[4736]: I0316 16:58:03.601388 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" event={"ID":"d0d91b75-e3b3-4622-9acd-7d160728454c","Type":"ContainerDied","Data":"0852a199e0a4eadd238709c50a64694592ece722171d3005ad43ae475fec1e29"} Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.019392 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.149705 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glhmg\" (UniqueName: \"kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg\") pod \"d0d91b75-e3b3-4622-9acd-7d160728454c\" (UID: \"d0d91b75-e3b3-4622-9acd-7d160728454c\") " Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.158592 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg" (OuterVolumeSpecName: "kube-api-access-glhmg") pod "d0d91b75-e3b3-4622-9acd-7d160728454c" (UID: "d0d91b75-e3b3-4622-9acd-7d160728454c"). InnerVolumeSpecName "kube-api-access-glhmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.253889 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glhmg\" (UniqueName: \"kubernetes.io/projected/d0d91b75-e3b3-4622-9acd-7d160728454c-kube-api-access-glhmg\") on node \"crc\" DevicePath \"\"" Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.629649 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" event={"ID":"d0d91b75-e3b3-4622-9acd-7d160728454c","Type":"ContainerDied","Data":"c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057"} Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.630093 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c76bca318829a6fdf7ac055c584b0e534ded9b4b84106e6bac554f262ca5f057" Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.629901 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561338-zn5d4" Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.679477 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561332-kx4g9"] Mar 16 16:58:05 crc kubenswrapper[4736]: I0316 16:58:05.686741 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561332-kx4g9"] Mar 16 16:58:06 crc kubenswrapper[4736]: I0316 16:58:06.989012 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee75f66-a179-4458-91b5-832d04d1d392" path="/var/lib/kubelet/pods/1ee75f66-a179-4458-91b5-832d04d1d392/volumes" Mar 16 16:59:00 crc kubenswrapper[4736]: I0316 16:59:00.029816 4736 scope.go:117] "RemoveContainer" containerID="ce136ed60825e8496cf131856d9c8ff0855ed153aff0945ea9075932df9a749f" Mar 16 16:59:38 crc kubenswrapper[4736]: I0316 16:59:38.507943 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 16:59:38 crc kubenswrapper[4736]: I0316 16:59:38.508489 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.068792 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 16:59:56 crc kubenswrapper[4736]: E0316 16:59:56.069675 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d91b75-e3b3-4622-9acd-7d160728454c" containerName="oc" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.069686 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d91b75-e3b3-4622-9acd-7d160728454c" containerName="oc" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.069874 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d91b75-e3b3-4622-9acd-7d160728454c" containerName="oc" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.074866 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.090818 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.116870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.116994 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4x2\" (UniqueName: \"kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.117174 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.221079 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.221185 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.221269 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4x2\" (UniqueName: \"kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.221534 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.221965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.249955 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4b4x2\" (UniqueName: \"kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2\") pod \"redhat-operators-fnp22\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:56 crc kubenswrapper[4736]: I0316 16:59:56.398151 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 16:59:57 crc kubenswrapper[4736]: I0316 16:59:57.177209 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 16:59:57 crc kubenswrapper[4736]: I0316 16:59:57.970852 4736 generic.go:334] "Generic (PLEG): container finished" podID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerID="672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa" exitCode=0 Mar 16 16:59:57 crc kubenswrapper[4736]: I0316 16:59:57.971027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerDied","Data":"672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa"} Mar 16 16:59:57 crc kubenswrapper[4736]: I0316 16:59:57.971318 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerStarted","Data":"f6fe6b9995fbe5b0c694c3685b88d47e05acce4142aa1848be9ff5e7ae1e9737"} Mar 16 16:59:58 crc kubenswrapper[4736]: I0316 16:59:58.994247 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerStarted","Data":"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13"} Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.170286 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561340-mkq5c"] Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.172206 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.175861 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.176152 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.176298 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.180154 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk"] Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.181449 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.192321 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561340-mkq5c"] Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.192864 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.192998 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.206836 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk"] Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.296670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfhx\" (UniqueName: \"kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.296746 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sfjx\" (UniqueName: \"kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx\") pod \"auto-csr-approver-29561340-mkq5c\" (UID: \"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37\") " pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.296835 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.296912 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.398439 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.398522 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfhx\" (UniqueName: \"kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.398569 
4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sfjx\" (UniqueName: \"kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx\") pod \"auto-csr-approver-29561340-mkq5c\" (UID: \"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37\") " pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.398659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.399451 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.409894 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.426171 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sfjx\" (UniqueName: \"kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx\") pod \"auto-csr-approver-29561340-mkq5c\" (UID: \"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37\") " pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.431473 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfhx\" (UniqueName: \"kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx\") pod \"collect-profiles-29561340-9xvwk\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.506488 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:00 crc kubenswrapper[4736]: I0316 17:00:00.518509 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:01 crc kubenswrapper[4736]: I0316 17:00:01.080257 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561340-mkq5c"] Mar 16 17:00:01 crc kubenswrapper[4736]: W0316 17:00:01.089662 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb19c45f_badc_45c9_9d8c_5a2e5fdf4c37.slice/crio-a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a WatchSource:0}: Error finding container a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a: Status 404 returned error can't find the container with id a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a Mar 16 17:00:01 crc kubenswrapper[4736]: I0316 17:00:01.101431 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk"] Mar 16 17:00:02 crc kubenswrapper[4736]: I0316 17:00:02.030468 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" event={"ID":"73a6883c-3afc-4930-91a7-510201651ed9","Type":"ContainerStarted","Data":"c4c991cc8b8e0001dd0ef00be6d1f19243a09c6a7b317782a3f91a88f5a1a26e"} Mar 16 17:00:02 crc kubenswrapper[4736]: I0316 17:00:02.030889 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" event={"ID":"73a6883c-3afc-4930-91a7-510201651ed9","Type":"ContainerStarted","Data":"58b658f9e9eaf7170edb67f8811ba70696bf4d4e473398ec5778d94ff921a825"} Mar 16 17:00:02 crc kubenswrapper[4736]: I0316 17:00:02.031842 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" event={"ID":"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37","Type":"ContainerStarted","Data":"a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a"} Mar 16 17:00:02 crc kubenswrapper[4736]: I0316 17:00:02.050182 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" podStartSLOduration=2.050162727 podStartE2EDuration="2.050162727s" podCreationTimestamp="2026-03-16 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:00:02.043852985 +0000 UTC m=+6403.771243282" watchObservedRunningTime="2026-03-16 17:00:02.050162727 +0000 UTC m=+6403.777553014" Mar 16 17:00:03 crc kubenswrapper[4736]: I0316 17:00:03.041030 4736 generic.go:334] "Generic (PLEG): container finished" podID="73a6883c-3afc-4930-91a7-510201651ed9" containerID="c4c991cc8b8e0001dd0ef00be6d1f19243a09c6a7b317782a3f91a88f5a1a26e" exitCode=0 Mar 16 17:00:03 crc kubenswrapper[4736]: I0316 17:00:03.041082 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" event={"ID":"73a6883c-3afc-4930-91a7-510201651ed9","Type":"ContainerDied","Data":"c4c991cc8b8e0001dd0ef00be6d1f19243a09c6a7b317782a3f91a88f5a1a26e"} Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.054787 4736 generic.go:334] "Generic (PLEG): container finished" podID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerID="c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13" exitCode=0 Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.055950 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerDied","Data":"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13"} Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.066654 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.624470 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.796358 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume\") pod \"73a6883c-3afc-4930-91a7-510201651ed9\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.796424 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume\") pod \"73a6883c-3afc-4930-91a7-510201651ed9\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.796491 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqfhx\" (UniqueName: \"kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx\") pod \"73a6883c-3afc-4930-91a7-510201651ed9\" (UID: \"73a6883c-3afc-4930-91a7-510201651ed9\") " Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.797325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume" (OuterVolumeSpecName: "config-volume") pod "73a6883c-3afc-4930-91a7-510201651ed9" (UID: "73a6883c-3afc-4930-91a7-510201651ed9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.803552 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "73a6883c-3afc-4930-91a7-510201651ed9" (UID: "73a6883c-3afc-4930-91a7-510201651ed9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.807241 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx" (OuterVolumeSpecName: "kube-api-access-fqfhx") pod "73a6883c-3afc-4930-91a7-510201651ed9" (UID: "73a6883c-3afc-4930-91a7-510201651ed9"). InnerVolumeSpecName "kube-api-access-fqfhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.898622 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73a6883c-3afc-4930-91a7-510201651ed9-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.898674 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73a6883c-3afc-4930-91a7-510201651ed9-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:04 crc kubenswrapper[4736]: I0316 17:00:04.898686 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqfhx\" (UniqueName: \"kubernetes.io/projected/73a6883c-3afc-4930-91a7-510201651ed9-kube-api-access-fqfhx\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.064987 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" event={"ID":"73a6883c-3afc-4930-91a7-510201651ed9","Type":"ContainerDied","Data":"58b658f9e9eaf7170edb67f8811ba70696bf4d4e473398ec5778d94ff921a825"} Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.065271 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58b658f9e9eaf7170edb67f8811ba70696bf4d4e473398ec5778d94ff921a825" Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.065194 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk" Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.066200 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" event={"ID":"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37","Type":"ContainerStarted","Data":"342782e2b2811c48ccd31483ff63adbd230a14dc85d6acae27d8e9c89c2099e0"} Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.092127 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" podStartSLOduration=1.6952568449999998 podStartE2EDuration="5.092111856s" podCreationTimestamp="2026-03-16 17:00:00 +0000 UTC" firstStartedPulling="2026-03-16 17:00:01.09238331 +0000 UTC m=+6402.819773597" lastFinishedPulling="2026-03-16 17:00:04.489238321 +0000 UTC m=+6406.216628608" observedRunningTime="2026-03-16 17:00:05.090149533 +0000 UTC m=+6406.817539820" watchObservedRunningTime="2026-03-16 17:00:05.092111856 +0000 UTC m=+6406.819502133" Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.710493 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q"] Mar 16 17:00:05 crc kubenswrapper[4736]: I0316 17:00:05.722872 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561295-75v2q"] Mar 16 17:00:06 crc kubenswrapper[4736]: I0316 17:00:06.076789 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerStarted","Data":"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea"} Mar 16 17:00:06 crc kubenswrapper[4736]: I0316 17:00:06.116304 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fnp22" podStartSLOduration=3.035969396 
podStartE2EDuration="10.116276564s" podCreationTimestamp="2026-03-16 16:59:56 +0000 UTC" firstStartedPulling="2026-03-16 16:59:57.973224926 +0000 UTC m=+6399.700615213" lastFinishedPulling="2026-03-16 17:00:05.053532094 +0000 UTC m=+6406.780922381" observedRunningTime="2026-03-16 17:00:06.108047399 +0000 UTC m=+6407.835437706" watchObservedRunningTime="2026-03-16 17:00:06.116276564 +0000 UTC m=+6407.843666861" Mar 16 17:00:06 crc kubenswrapper[4736]: I0316 17:00:06.400699 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:06 crc kubenswrapper[4736]: I0316 17:00:06.400747 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:06 crc kubenswrapper[4736]: I0316 17:00:06.992071 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77cf92e2-f06a-4211-b2f3-9c65fdef5987" path="/var/lib/kubelet/pods/77cf92e2-f06a-4211-b2f3-9c65fdef5987/volumes" Mar 16 17:00:07 crc kubenswrapper[4736]: I0316 17:00:07.091236 4736 generic.go:334] "Generic (PLEG): container finished" podID="cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" containerID="342782e2b2811c48ccd31483ff63adbd230a14dc85d6acae27d8e9c89c2099e0" exitCode=0 Mar 16 17:00:07 crc kubenswrapper[4736]: I0316 17:00:07.091277 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" event={"ID":"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37","Type":"ContainerDied","Data":"342782e2b2811c48ccd31483ff63adbd230a14dc85d6acae27d8e9c89c2099e0"} Mar 16 17:00:07 crc kubenswrapper[4736]: I0316 17:00:07.456535 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fnp22" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" probeResult="failure" output=< Mar 16 17:00:07 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:00:07 crc kubenswrapper[4736]: > Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.510253 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.510528 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.599941 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.669094 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sfjx\" (UniqueName: \"kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx\") pod \"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37\" (UID: \"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37\") " Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.676263 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx" (OuterVolumeSpecName: "kube-api-access-4sfjx") pod "cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" (UID: "cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37"). InnerVolumeSpecName "kube-api-access-4sfjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:00:08 crc kubenswrapper[4736]: I0316 17:00:08.771320 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sfjx\" (UniqueName: \"kubernetes.io/projected/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37-kube-api-access-4sfjx\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:09 crc kubenswrapper[4736]: I0316 17:00:09.106979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" event={"ID":"cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37","Type":"ContainerDied","Data":"a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a"} Mar 16 17:00:09 crc kubenswrapper[4736]: I0316 17:00:09.107394 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a46a89b0ee470aae7e0ff5a93e7f22ca92562f4fda947766b85574ff856c6d0a" Mar 16 17:00:09 crc kubenswrapper[4736]: I0316 17:00:09.107012 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561340-mkq5c" Mar 16 17:00:09 crc kubenswrapper[4736]: I0316 17:00:09.179321 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561334-g9kpg"] Mar 16 17:00:09 crc kubenswrapper[4736]: I0316 17:00:09.179616 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561334-g9kpg"] Mar 16 17:00:10 crc kubenswrapper[4736]: I0316 17:00:10.989000 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6db902e-e71c-4740-8f5d-623c70017205" path="/var/lib/kubelet/pods/f6db902e-e71c-4740-8f5d-623c70017205/volumes" Mar 16 17:00:17 crc kubenswrapper[4736]: I0316 17:00:17.452069 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fnp22" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" probeResult="failure" output=< Mar 16 17:00:17 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:00:17 crc kubenswrapper[4736]: > Mar 16 17:00:27 crc kubenswrapper[4736]: I0316 17:00:27.461624 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fnp22" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" probeResult="failure" output=< Mar 16 17:00:27 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:00:27 crc kubenswrapper[4736]: > Mar 16 17:00:37 crc kubenswrapper[4736]: I0316 17:00:37.449584 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fnp22" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" probeResult="failure" output=< Mar 16 17:00:37 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:00:37 crc kubenswrapper[4736]: > Mar 16 17:00:38 crc kubenswrapper[4736]: I0316 17:00:38.508529 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:00:38 crc kubenswrapper[4736]: I0316 17:00:38.508610 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:00:38 crc kubenswrapper[4736]: I0316 17:00:38.508661 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:00:38 crc kubenswrapper[4736]: I0316 17:00:38.509494 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:00:38 crc kubenswrapper[4736]: I0316 17:00:38.509564 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" gracePeriod=600 Mar 16 17:00:38 crc kubenswrapper[4736]: E0316 17:00:38.644702 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:00:39 crc kubenswrapper[4736]: I0316 17:00:39.372315 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" exitCode=0 Mar 16 17:00:39 crc kubenswrapper[4736]: I0316 17:00:39.372358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480"} Mar 16 17:00:39 crc kubenswrapper[4736]: I0316 17:00:39.372395 4736 scope.go:117] "RemoveContainer" containerID="6ed377f9c884681c0b4f334e9353ccbadf6d52d9097e6c8a08c51af13c92cc52" Mar 16 17:00:39 crc kubenswrapper[4736]: I0316 17:00:39.373126 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:00:39 crc kubenswrapper[4736]: E0316 17:00:39.373566 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:00:46 crc kubenswrapper[4736]: I0316 17:00:46.470493 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:46 crc kubenswrapper[4736]: I0316 17:00:46.542733 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:46 crc kubenswrapper[4736]: I0316 17:00:46.708206 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 17:00:48 crc kubenswrapper[4736]: I0316 17:00:48.457019 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fnp22" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" containerID="cri-o://4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea" gracePeriod=2 Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.206894 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.295974 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4x2\" (UniqueName: \"kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2\") pod \"cae06e61-b772-46f1-b417-e0350f91bd5e\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.296045 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content\") pod \"cae06e61-b772-46f1-b417-e0350f91bd5e\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.296155 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities\") pod \"cae06e61-b772-46f1-b417-e0350f91bd5e\" (UID: \"cae06e61-b772-46f1-b417-e0350f91bd5e\") " Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.296881 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities" (OuterVolumeSpecName: "utilities") pod "cae06e61-b772-46f1-b417-e0350f91bd5e" (UID: "cae06e61-b772-46f1-b417-e0350f91bd5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.312690 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2" (OuterVolumeSpecName: "kube-api-access-4b4x2") pod "cae06e61-b772-46f1-b417-e0350f91bd5e" (UID: "cae06e61-b772-46f1-b417-e0350f91bd5e"). InnerVolumeSpecName "kube-api-access-4b4x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.399035 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4x2\" (UniqueName: \"kubernetes.io/projected/cae06e61-b772-46f1-b417-e0350f91bd5e-kube-api-access-4b4x2\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.399096 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.421399 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cae06e61-b772-46f1-b417-e0350f91bd5e" (UID: "cae06e61-b772-46f1-b417-e0350f91bd5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.467636 4736 generic.go:334] "Generic (PLEG): container finished" podID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerID="4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea" exitCode=0 Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.467689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerDied","Data":"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea"} Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.467737 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fnp22" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.467753 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fnp22" event={"ID":"cae06e61-b772-46f1-b417-e0350f91bd5e","Type":"ContainerDied","Data":"f6fe6b9995fbe5b0c694c3685b88d47e05acce4142aa1848be9ff5e7ae1e9737"} Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.467779 4736 scope.go:117] "RemoveContainer" containerID="4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.501620 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae06e61-b772-46f1-b417-e0350f91bd5e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.517350 4736 scope.go:117] "RemoveContainer" containerID="c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.522933 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.534120 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fnp22"] Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.543308 4736 scope.go:117] "RemoveContainer" containerID="672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.584164 4736 scope.go:117] "RemoveContainer" containerID="4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea" Mar 16 17:00:49 crc kubenswrapper[4736]: E0316 17:00:49.589821 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea\": container with ID starting with 4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea not found: ID does not exist" containerID="4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.589934 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea"} err="failed to get container status \"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea\": rpc error: code = NotFound desc = could not find container \"4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea\": container with ID starting with 4decf01713c028f6a1b18c064b3d5b849b475cbe1d8e04c4c05da9ec09c087ea not found: ID does not exist" Mar 16 17:00:49 crc 
kubenswrapper[4736]: I0316 17:00:49.589979 4736 scope.go:117] "RemoveContainer" containerID="c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13" Mar 16 17:00:49 crc kubenswrapper[4736]: E0316 17:00:49.590370 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13\": container with ID starting with c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13 not found: ID does not exist" containerID="c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.590416 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13"} err="failed to get container status \"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13\": rpc error: code = NotFound desc = could not find container \"c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13\": container with ID starting with c21345f85ee0827564935e8124445a2e6f45b2f8313f3ccb6f48d9b19cf02f13 not found: ID does not exist" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.590449 4736 scope.go:117] "RemoveContainer" containerID="672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa" Mar 16 17:00:49 crc kubenswrapper[4736]: E0316 17:00:49.590791 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa\": container with ID starting with 672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa not found: ID does not exist" containerID="672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa" Mar 16 17:00:49 crc kubenswrapper[4736]: I0316 17:00:49.590846 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa"} err="failed to get container status \"672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa\": rpc error: code = NotFound desc = could not find container \"672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa\": container with ID starting with 672794bf7b61f4170614595f4aff3993de5da71794fa1c4eb8c51ec6b21e6faa not found: ID does not exist" Mar 16 17:00:50 crc kubenswrapper[4736]: I0316 17:00:50.991913 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" path="/var/lib/kubelet/pods/cae06e61-b772-46f1-b417-e0350f91bd5e/volumes" Mar 16 17:00:52 crc kubenswrapper[4736]: I0316 17:00:52.978434 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:00:52 crc kubenswrapper[4736]: E0316 17:00:52.979190 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.139963 4736 scope.go:117] "RemoveContainer" containerID="f47672a647ae58db1e8b3eeb127d92f433e71bc400d3022a01ba423add1472b9" 
Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.197739 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29561341-2v97d"] Mar 16 17:01:00 crc kubenswrapper[4736]: E0316 17:01:00.198267 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="extract-content" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198284 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="extract-content" Mar 16 17:01:00 crc kubenswrapper[4736]: E0316 17:01:00.198305 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="extract-utilities" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198313 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="extract-utilities" Mar 16 17:01:00 crc kubenswrapper[4736]: E0316 17:01:00.198328 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198336 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" Mar 16 17:01:00 crc kubenswrapper[4736]: E0316 17:01:00.198354 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a6883c-3afc-4930-91a7-510201651ed9" containerName="collect-profiles" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198361 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a6883c-3afc-4930-91a7-510201651ed9" containerName="collect-profiles" Mar 16 17:01:00 crc kubenswrapper[4736]: E0316 17:01:00.198386 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" containerName="oc" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198393 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" containerName="oc" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198681 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae06e61-b772-46f1-b417-e0350f91bd5e" containerName="registry-server" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198699 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" containerName="oc" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.198725 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a6883c-3afc-4930-91a7-510201651ed9" containerName="collect-profiles" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.199962 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.212789 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561341-2v97d"] Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.225828 4736 scope.go:117] "RemoveContainer" containerID="3376f8edc3a0196b1244bf39e51ef57f81bc11a9d9afc707646c3073650fdf7d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.308948 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.308992 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.309375 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842wr\" (UniqueName: \"kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.309482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.411688 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-842wr\" (UniqueName: \"kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.411763 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.411857 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.411913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle\") pod \"keystone-cron-29561341-2v97d\" (UID: 
\"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.420739 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.420851 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.421886 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.429680 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-842wr\" (UniqueName: \"kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr\") pod \"keystone-cron-29561341-2v97d\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:00 crc kubenswrapper[4736]: I0316 17:01:00.532480 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:01 crc kubenswrapper[4736]: I0316 17:01:01.051699 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561341-2v97d"] Mar 16 17:01:01 crc kubenswrapper[4736]: I0316 17:01:01.588234 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561341-2v97d" event={"ID":"e106e4db-3f81-4cba-9c0a-697acced07dd","Type":"ContainerStarted","Data":"33800fa190cb9fe4b653344ac694e48e29cdfa4aa014a000052ef4d44f1cdd8b"} Mar 16 17:01:01 crc kubenswrapper[4736]: I0316 17:01:01.588301 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561341-2v97d" event={"ID":"e106e4db-3f81-4cba-9c0a-697acced07dd","Type":"ContainerStarted","Data":"035a82fd5b26689e0d7e0e5dc847a44ba904f612c39c43d1f6d2abfd59c1bb84"} Mar 16 17:01:01 crc kubenswrapper[4736]: I0316 17:01:01.618693 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29561341-2v97d" podStartSLOduration=1.6186708090000002 podStartE2EDuration="1.618670809s" podCreationTimestamp="2026-03-16 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:01:01.610813245 +0000 UTC m=+6463.338203542" watchObservedRunningTime="2026-03-16 17:01:01.618670809 +0000 UTC m=+6463.346061106" Mar 16 17:01:05 crc kubenswrapper[4736]: I0316 17:01:05.633932 4736 generic.go:334] "Generic (PLEG): container finished" podID="e106e4db-3f81-4cba-9c0a-697acced07dd" containerID="33800fa190cb9fe4b653344ac694e48e29cdfa4aa014a000052ef4d44f1cdd8b" exitCode=0 Mar 16 17:01:05 crc kubenswrapper[4736]: I0316 17:01:05.634003 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29561341-2v97d" event={"ID":"e106e4db-3f81-4cba-9c0a-697acced07dd","Type":"ContainerDied","Data":"33800fa190cb9fe4b653344ac694e48e29cdfa4aa014a000052ef4d44f1cdd8b"} Mar 16 17:01:05 crc kubenswrapper[4736]: I0316 17:01:05.978084 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:01:05 crc kubenswrapper[4736]: E0316 17:01:05.978378 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.047318 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.141449 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys\") pod \"e106e4db-3f81-4cba-9c0a-697acced07dd\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.141532 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle\") pod \"e106e4db-3f81-4cba-9c0a-697acced07dd\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.141693 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data\") pod \"e106e4db-3f81-4cba-9c0a-697acced07dd\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.141756 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-842wr\" (UniqueName: \"kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr\") pod \"e106e4db-3f81-4cba-9c0a-697acced07dd\" (UID: \"e106e4db-3f81-4cba-9c0a-697acced07dd\") " Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.147588 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e106e4db-3f81-4cba-9c0a-697acced07dd" (UID: "e106e4db-3f81-4cba-9c0a-697acced07dd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.147662 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr" (OuterVolumeSpecName: "kube-api-access-842wr") pod "e106e4db-3f81-4cba-9c0a-697acced07dd" (UID: "e106e4db-3f81-4cba-9c0a-697acced07dd"). InnerVolumeSpecName "kube-api-access-842wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.184003 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e106e4db-3f81-4cba-9c0a-697acced07dd" (UID: "e106e4db-3f81-4cba-9c0a-697acced07dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.204621 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data" (OuterVolumeSpecName: "config-data") pod "e106e4db-3f81-4cba-9c0a-697acced07dd" (UID: "e106e4db-3f81-4cba-9c0a-697acced07dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.244350 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.244385 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.244395 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e106e4db-3f81-4cba-9c0a-697acced07dd-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.244404 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-842wr\" (UniqueName: \"kubernetes.io/projected/e106e4db-3f81-4cba-9c0a-697acced07dd-kube-api-access-842wr\") on node \"crc\" DevicePath \"\"" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.656467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561341-2v97d" event={"ID":"e106e4db-3f81-4cba-9c0a-697acced07dd","Type":"ContainerDied","Data":"035a82fd5b26689e0d7e0e5dc847a44ba904f612c39c43d1f6d2abfd59c1bb84"} Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.656517 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035a82fd5b26689e0d7e0e5dc847a44ba904f612c39c43d1f6d2abfd59c1bb84" Mar 16 17:01:07 crc kubenswrapper[4736]: I0316 17:01:07.656581 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561341-2v97d" Mar 16 17:01:17 crc kubenswrapper[4736]: I0316 17:01:17.978859 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:01:17 crc kubenswrapper[4736]: E0316 17:01:17.985570 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:01:32 crc kubenswrapper[4736]: I0316 17:01:32.978627 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:01:32 crc kubenswrapper[4736]: E0316 17:01:32.979803 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:01:47 crc kubenswrapper[4736]: I0316 17:01:47.979254 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:01:47 crc kubenswrapper[4736]: E0316 17:01:47.980330 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:01:58 crc kubenswrapper[4736]: I0316 17:01:58.166487 4736 generic.go:334] "Generic (PLEG): container finished" podID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" containerID="98a899eb5d7c2fab7ff8f13bc38010546afd220ba2c4144b28b4a5b7d441beb2" exitCode=0 Mar 16 17:01:58 crc kubenswrapper[4736]: I0316 17:01:58.166585 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31","Type":"ContainerDied","Data":"98a899eb5d7c2fab7ff8f13bc38010546afd220ba2c4144b28b4a5b7d441beb2"} Mar 16 17:01:59 crc kubenswrapper[4736]: I0316 17:01:59.956177 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 17:01:59 crc kubenswrapper[4736]: I0316 17:01:59.978632 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:01:59 crc kubenswrapper[4736]: E0316 17:01:59.979076 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.039653 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.039789 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.039831 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.039978 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.040000 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.040085 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.040139 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwndq\" (UniqueName: \"kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.040193 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs\") pod 
\"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.040227 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config\") pod \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\" (UID: \"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31\") " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.041835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data" (OuterVolumeSpecName: "config-data") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.043091 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.046010 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.049311 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq" (OuterVolumeSpecName: "kube-api-access-hwndq") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "kube-api-access-hwndq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.077276 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.077691 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.083671 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.104829 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.112430 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" (UID: "50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.114255 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Mar 16 17:02:00 crc kubenswrapper[4736]: E0316 17:02:00.114610 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e106e4db-3f81-4cba-9c0a-697acced07dd" containerName="keystone-cron" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.114627 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e106e4db-3f81-4cba-9c0a-697acced07dd" containerName="keystone-cron" Mar 16 17:02:00 crc kubenswrapper[4736]: E0316 17:02:00.114642 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" containerName="tempest-tests-tempest-tests-runner" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.114649 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" containerName="tempest-tests-tempest-tests-runner" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.114867 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31" containerName="tempest-tests-tempest-tests-runner" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.114886 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e106e4db-3f81-4cba-9c0a-697acced07dd" containerName="keystone-cron" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.118372 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.120777 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.121140 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.129058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143080 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143207 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwndq\" (UniqueName: \"kubernetes.io/projected/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-kube-api-access-hwndq\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143221 4736 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143230 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143240 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143264 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143274 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143286 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.143296 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.162602 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.193908 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561342-kgtqt"] Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 
17:02:00.195307 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.197589 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.200097 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.202738 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.230280 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31","Type":"ContainerDied","Data":"307caa4d112429eb1d3f33368ee6e6756b3e73b61c54e41e28b54dd934a24f20"} Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.230324 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="307caa4d112429eb1d3f33368ee6e6756b3e73b61c54e41e28b54dd934a24f20" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.230447 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.241326 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561342-kgtqt"] Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259391 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7g5\" (UniqueName: \"kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259574 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4bk\" (UniqueName: \"kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk\") pod \"auto-csr-approver-29561342-kgtqt\" (UID: \"055191d5-6303-4930-a098-dd5ee0577c3d\") " pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259691 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259806 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259840 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259884 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.259942 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.260053 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.260092 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.260163 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.263567 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.300321 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.362895 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7g5\" (UniqueName: 
\"kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.362970 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg4bk\" (UniqueName: \"kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk\") pod \"auto-csr-approver-29561342-kgtqt\" (UID: \"055191d5-6303-4930-a098-dd5ee0577c3d\") " pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363015 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363075 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363188 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363212 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.363235 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret\") pod 
\"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.364740 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.364913 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.365180 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.366489 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.367050 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.367519 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.368152 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.379336 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg4bk\" (UniqueName: \"kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk\") pod \"auto-csr-approver-29561342-kgtqt\" (UID: \"055191d5-6303-4930-a098-dd5ee0577c3d\") " 
pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.380566 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7g5\" (UniqueName: \"kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.549137 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 17:02:00 crc kubenswrapper[4736]: I0316 17:02:00.569244 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:01 crc kubenswrapper[4736]: I0316 17:02:01.156532 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Mar 16 17:02:01 crc kubenswrapper[4736]: I0316 17:02:01.161852 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561342-kgtqt"] Mar 16 17:02:01 crc kubenswrapper[4736]: W0316 17:02:01.165264 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod055191d5_6303_4930_a098_dd5ee0577c3d.slice/crio-6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800 WatchSource:0}: Error finding container 6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800: Status 404 returned error can't find the container with id 6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800 Mar 16 17:02:01 crc kubenswrapper[4736]: I0316 17:02:01.248915 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"252155f6-a310-43e1-bf80-1d17a2db2128","Type":"ContainerStarted","Data":"20fffcdbdd87913f5e08278186713ac34ac9a6fe976cdc719543b545cad2cfdf"} Mar 16 17:02:01 crc kubenswrapper[4736]: I0316 17:02:01.251445 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" event={"ID":"055191d5-6303-4930-a098-dd5ee0577c3d","Type":"ContainerStarted","Data":"6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800"} Mar 16 17:02:03 crc kubenswrapper[4736]: I0316 17:02:03.272527 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" event={"ID":"055191d5-6303-4930-a098-dd5ee0577c3d","Type":"ContainerStarted","Data":"c88897e7b5e2fff733a9808148bfa43e6c2b57f8fbd24c80bc3baedd4ee10b44"} Mar 16 17:02:03 crc kubenswrapper[4736]: I0316 17:02:03.294441 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" podStartSLOduration=2.144103301 podStartE2EDuration="3.29440359s" podCreationTimestamp="2026-03-16 17:02:00 +0000 UTC" firstStartedPulling="2026-03-16 17:02:01.166744642 +0000 UTC m=+6522.894134929" lastFinishedPulling="2026-03-16 17:02:02.317044931 +0000 UTC m=+6524.044435218" observedRunningTime="2026-03-16 17:02:03.291904033 +0000 UTC m=+6525.019294320" watchObservedRunningTime="2026-03-16 17:02:03.29440359 +0000 UTC m=+6525.021793877" Mar 16 17:02:04 crc kubenswrapper[4736]: I0316 17:02:04.282512 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="055191d5-6303-4930-a098-dd5ee0577c3d" containerID="c88897e7b5e2fff733a9808148bfa43e6c2b57f8fbd24c80bc3baedd4ee10b44" exitCode=0 Mar 16 17:02:04 crc kubenswrapper[4736]: I0316 17:02:04.282619 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" event={"ID":"055191d5-6303-4930-a098-dd5ee0577c3d","Type":"ContainerDied","Data":"c88897e7b5e2fff733a9808148bfa43e6c2b57f8fbd24c80bc3baedd4ee10b44"} Mar 16 17:02:04 crc kubenswrapper[4736]: I0316 17:02:04.285743 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"252155f6-a310-43e1-bf80-1d17a2db2128","Type":"ContainerStarted","Data":"c6fd4d0d0888ee52fc3ce7ef3616d645b9b8efdc7e9d72c27c702ca295bf4e41"} Mar 16 17:02:05 crc kubenswrapper[4736]: I0316 17:02:05.625979 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:05 crc kubenswrapper[4736]: I0316 17:02:05.642503 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=5.642483463 podStartE2EDuration="5.642483463s" podCreationTimestamp="2026-03-16 17:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:02:04.323187475 +0000 UTC m=+6526.050577792" watchObservedRunningTime="2026-03-16 17:02:05.642483463 +0000 UTC m=+6527.369873750" Mar 16 17:02:05 crc kubenswrapper[4736]: I0316 17:02:05.698785 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg4bk\" (UniqueName: \"kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk\") pod \"055191d5-6303-4930-a098-dd5ee0577c3d\" (UID: \"055191d5-6303-4930-a098-dd5ee0577c3d\") " Mar 16 17:02:05 crc kubenswrapper[4736]: I0316 17:02:05.704892 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk" (OuterVolumeSpecName: "kube-api-access-lg4bk") pod "055191d5-6303-4930-a098-dd5ee0577c3d" (UID: "055191d5-6303-4930-a098-dd5ee0577c3d"). InnerVolumeSpecName "kube-api-access-lg4bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:02:05 crc kubenswrapper[4736]: I0316 17:02:05.801942 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg4bk\" (UniqueName: \"kubernetes.io/projected/055191d5-6303-4930-a098-dd5ee0577c3d-kube-api-access-lg4bk\") on node \"crc\" DevicePath \"\"" Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.305321 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" event={"ID":"055191d5-6303-4930-a098-dd5ee0577c3d","Type":"ContainerDied","Data":"6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800"} Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.305362 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e156bb2587876ce73aecef5dafd9cc2826468b67ec9839de9824f3b9b03c800" Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.305421 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561342-kgtqt" Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.370772 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561336-n9gvg"] Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.379299 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561336-n9gvg"] Mar 16 17:02:06 crc kubenswrapper[4736]: I0316 17:02:06.989626 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f0bd710-8de2-47b1-915a-7592bb25311c" path="/var/lib/kubelet/pods/2f0bd710-8de2-47b1-915a-7592bb25311c/volumes" Mar 16 17:02:10 crc kubenswrapper[4736]: I0316 17:02:10.978686 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:02:10 crc kubenswrapper[4736]: E0316 17:02:10.979864 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:02:21 crc kubenswrapper[4736]: I0316 17:02:21.978042 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:02:21 crc kubenswrapper[4736]: E0316 17:02:21.978914 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:02:33 crc kubenswrapper[4736]: I0316 17:02:33.978526 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:02:33 crc kubenswrapper[4736]: E0316 17:02:33.979400 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:02:46 crc kubenswrapper[4736]: I0316 17:02:46.978034 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:02:46 crc kubenswrapper[4736]: E0316 17:02:46.979056 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:02:58 crc kubenswrapper[4736]: I0316 17:02:58.991829 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 
17:02:58 crc kubenswrapper[4736]: E0316 17:02:58.992819 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:03:00 crc kubenswrapper[4736]: I0316 17:03:00.339350 4736 scope.go:117] "RemoveContainer" containerID="0d8f5c1b8839167b068c3c626559d159a42a339ea808791a952895b5ef7bf56e" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.403002 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 17:03:05 crc kubenswrapper[4736]: E0316 17:03:05.404105 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="055191d5-6303-4930-a098-dd5ee0577c3d" containerName="oc" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.404151 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="055191d5-6303-4930-a098-dd5ee0577c3d" containerName="oc" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.404385 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="055191d5-6303-4930-a098-dd5ee0577c3d" containerName="oc" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.405614 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.423254 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.486589 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.486965 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.487063 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.489974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lclw8\" (UniqueName: \"kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.490140 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.490263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.490338 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.593040 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lclw8\" (UniqueName: \"kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.593820 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.595195 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.595655 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.595897 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.596085 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.596233 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle\") pod 
\"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.629362 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.641812 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.642442 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.643721 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lclw8\" (UniqueName: \"kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.644665 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.657220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.657729 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs\") pod \"neutron-6d578f4777-v7g9k\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:05 crc kubenswrapper[4736]: I0316 17:03:05.727354 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.411910 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.902179 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerStarted","Data":"7647f5f97855fd348234de5f521ad8b3bd622297e950bb3b347854dbfca4d881"} Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.902549 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerStarted","Data":"bb7591e2394d320f8150cf1fee9dd42381493664411f7e46e51145425497b4d8"} Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.902563 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerStarted","Data":"c4a8d22b4870fff10b520c979c6be084edbb2256fa2132e4148fe4cfd3de0db6"} Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.902679 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:06 crc kubenswrapper[4736]: I0316 17:03:06.922355 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d578f4777-v7g9k" podStartSLOduration=1.922332505 podStartE2EDuration="1.922332505s" podCreationTimestamp="2026-03-16 17:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:03:06.917917304 +0000 UTC m=+6588.645307591" watchObservedRunningTime="2026-03-16 17:03:06.922332505 +0000 UTC m=+6588.649722782" Mar 16 17:03:10 crc kubenswrapper[4736]: I0316 17:03:10.977917 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:03:10 crc kubenswrapper[4736]: E0316 17:03:10.978856 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:03:23 crc kubenswrapper[4736]: I0316 17:03:23.977824 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:03:23 crc kubenswrapper[4736]: E0316 17:03:23.978741 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:03:35 crc kubenswrapper[4736]: I0316 17:03:35.743753 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 17:03:35 crc kubenswrapper[4736]: I0316 17:03:35.835937 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 17:03:35 crc kubenswrapper[4736]: I0316 17:03:35.836379 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db557dd69-7c5jd" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-api" containerID="cri-o://e10c42b60a1ab70081f329e9150dce172acdc088ad3d2aecca633c9c890a9497" gracePeriod=30 Mar 16 17:03:35 crc kubenswrapper[4736]: I0316 17:03:35.836579 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5db557dd69-7c5jd" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-httpd" containerID="cri-o://a9da06a2bea8b11a8dcf443efedb3eb7c850a37344810275093138e32c14867a" gracePeriod=30 Mar 16 17:03:36 crc kubenswrapper[4736]: I0316 17:03:36.172679 4736 generic.go:334] "Generic (PLEG): container finished" podID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerID="a9da06a2bea8b11a8dcf443efedb3eb7c850a37344810275093138e32c14867a" exitCode=0 Mar 16 17:03:36 crc kubenswrapper[4736]: I0316 17:03:36.172858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerDied","Data":"a9da06a2bea8b11a8dcf443efedb3eb7c850a37344810275093138e32c14867a"} Mar 16 17:03:36 crc kubenswrapper[4736]: I0316 17:03:36.978454 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:03:36 crc kubenswrapper[4736]: E0316 17:03:36.979005 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:03:39 crc kubenswrapper[4736]: I0316 17:03:39.976515 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5db557dd69-7c5jd" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": dial tcp 10.217.0.172:9696: connect: connection refused" Mar 16 17:03:46 crc kubenswrapper[4736]: I0316 17:03:46.971607 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:03:46 crc kubenswrapper[4736]: I0316 17:03:46.974657 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:46 crc kubenswrapper[4736]: I0316 17:03:46.993566 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.090877 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58fr\" (UniqueName: \"kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.091186 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.091562 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.193860 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.193941 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58fr\" (UniqueName: \"kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.193982 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.194490 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.194569 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.225057 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x58fr\" (UniqueName: \"kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr\") pod \"certified-operators-8cxrm\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:47 crc kubenswrapper[4736]: I0316 17:03:47.317315 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:48 crc kubenswrapper[4736]: I0316 17:03:48.008089 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:03:48 crc kubenswrapper[4736]: W0316 17:03:48.014993 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5a51684_25b8_4845_8e60_0823ef234022.slice/crio-6606af96e40a1decdc5f0b2649eeee2e8dabd52c739077bf9d28e3c6d4be7c37 WatchSource:0}: Error finding container 6606af96e40a1decdc5f0b2649eeee2e8dabd52c739077bf9d28e3c6d4be7c37: Status 404 returned error can't find the container with id 6606af96e40a1decdc5f0b2649eeee2e8dabd52c739077bf9d28e3c6d4be7c37 Mar 16 17:03:48 crc kubenswrapper[4736]: I0316 17:03:48.290624 4736 generic.go:334] "Generic (PLEG): container finished" podID="f5a51684-25b8-4845-8e60-0823ef234022" containerID="191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd" exitCode=0 Mar 16 17:03:48 crc kubenswrapper[4736]: I0316 17:03:48.290668 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerDied","Data":"191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd"} Mar 16 17:03:48 crc kubenswrapper[4736]: I0316 17:03:48.290695 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerStarted","Data":"6606af96e40a1decdc5f0b2649eeee2e8dabd52c739077bf9d28e3c6d4be7c37"} Mar 16 17:03:49 crc kubenswrapper[4736]: I0316 17:03:49.304250 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerStarted","Data":"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904"} Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.314861 4736 generic.go:334] "Generic (PLEG): container finished" podID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerID="e10c42b60a1ab70081f329e9150dce172acdc088ad3d2aecca633c9c890a9497" exitCode=0 Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.314917 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerDied","Data":"e10c42b60a1ab70081f329e9150dce172acdc088ad3d2aecca633c9c890a9497"} Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.541501 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.596203 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.596369 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2qv\" (UniqueName: \"kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.596420 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.596464 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.596526 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.597209 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.597257 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle\") pod \"f529e0b5-791d-4f53-a035-b0112d59d7b8\" (UID: \"f529e0b5-791d-4f53-a035-b0112d59d7b8\") " Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.613545 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.614523 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv" (OuterVolumeSpecName: "kube-api-access-lf2qv") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "kube-api-access-lf2qv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.651647 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.673682 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.692240 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.700931 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.700963 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.700974 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.700985 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.700997 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2qv\" (UniqueName: \"kubernetes.io/projected/f529e0b5-791d-4f53-a035-b0112d59d7b8-kube-api-access-lf2qv\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.709688 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.710898 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config" (OuterVolumeSpecName: "config") pod "f529e0b5-791d-4f53-a035-b0112d59d7b8" (UID: "f529e0b5-791d-4f53-a035-b0112d59d7b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.803092 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.803139 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f529e0b5-791d-4f53-a035-b0112d59d7b8-config\") on node \"crc\" DevicePath \"\"" Mar 16 17:03:50 crc kubenswrapper[4736]: I0316 17:03:50.978351 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:03:50 crc kubenswrapper[4736]: E0316 17:03:50.978739 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.330455 4736 generic.go:334] "Generic (PLEG): container finished" podID="f5a51684-25b8-4845-8e60-0823ef234022" containerID="f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904" exitCode=0 Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.330523 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerDied","Data":"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904"} Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.336916 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5db557dd69-7c5jd" event={"ID":"f529e0b5-791d-4f53-a035-b0112d59d7b8","Type":"ContainerDied","Data":"05f6466cd4d85ef1b03314db78912c872fc2a088ec525bb45628389fa1dece00"} Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.336971 4736 scope.go:117] "RemoveContainer" containerID="a9da06a2bea8b11a8dcf443efedb3eb7c850a37344810275093138e32c14867a" Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.337166 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5db557dd69-7c5jd" Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.386731 4736 scope.go:117] "RemoveContainer" containerID="e10c42b60a1ab70081f329e9150dce172acdc088ad3d2aecca633c9c890a9497" Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.387176 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 17:03:51 crc kubenswrapper[4736]: I0316 17:03:51.392335 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5db557dd69-7c5jd"] Mar 16 17:03:52 crc kubenswrapper[4736]: I0316 17:03:52.349406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerStarted","Data":"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244"} Mar 16 17:03:52 crc kubenswrapper[4736]: I0316 17:03:52.375080 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8cxrm" podStartSLOduration=2.929906504 podStartE2EDuration="6.375062382s" podCreationTimestamp="2026-03-16 17:03:46 +0000 UTC" firstStartedPulling="2026-03-16 17:03:48.292681431 +0000 UTC m=+6630.020071718" lastFinishedPulling="2026-03-16 17:03:51.737837309 +0000 UTC m=+6633.465227596" observedRunningTime="2026-03-16 17:03:52.366871089 +0000 UTC m=+6634.094261386" watchObservedRunningTime="2026-03-16 17:03:52.375062382 +0000 UTC m=+6634.102452659" Mar 16 17:03:52 crc kubenswrapper[4736]: I0316 17:03:52.988939 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" path="/var/lib/kubelet/pods/f529e0b5-791d-4f53-a035-b0112d59d7b8/volumes" Mar 16 17:03:57 crc kubenswrapper[4736]: I0316 17:03:57.317905 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:57 crc kubenswrapper[4736]: I0316 17:03:57.318310 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:57 crc kubenswrapper[4736]: I0316 17:03:57.366072 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:57 crc kubenswrapper[4736]: I0316 17:03:57.462949 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:03:57 crc kubenswrapper[4736]: I0316 17:03:57.617244 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:03:59 crc kubenswrapper[4736]: I0316 17:03:59.418057 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8cxrm" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="registry-server" containerID="cri-o://3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244" gracePeriod=2 Mar 16 17:03:59 crc kubenswrapper[4736]: I0316 17:03:59.961416 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.108643 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities\") pod \"f5a51684-25b8-4845-8e60-0823ef234022\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.108762 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content\") pod \"f5a51684-25b8-4845-8e60-0823ef234022\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.108934 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x58fr\" (UniqueName: \"kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr\") pod \"f5a51684-25b8-4845-8e60-0823ef234022\" (UID: \"f5a51684-25b8-4845-8e60-0823ef234022\") " Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.110873 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities" (OuterVolumeSpecName: "utilities") pod "f5a51684-25b8-4845-8e60-0823ef234022" (UID: "f5a51684-25b8-4845-8e60-0823ef234022"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.121323 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr" (OuterVolumeSpecName: "kube-api-access-x58fr") pod "f5a51684-25b8-4845-8e60-0823ef234022" (UID: "f5a51684-25b8-4845-8e60-0823ef234022"). InnerVolumeSpecName "kube-api-access-x58fr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.160423 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561344-74fs8"] Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.160947 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-httpd" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.160972 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-httpd" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.160988 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-api" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.160996 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-api" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.161008 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="registry-server" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.161014 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="registry-server" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.161021 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="extract-utilities" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.161029 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="extract-utilities" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.161052 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="extract-content" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.161062 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="extract-content" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.163638 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5a51684-25b8-4845-8e60-0823ef234022" containerName="registry-server" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.163682 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-httpd" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.163694 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f529e0b5-791d-4f53-a035-b0112d59d7b8" containerName="neutron-api" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.164799 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.167207 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.167429 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.167800 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.183896 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561344-74fs8"] Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.192794 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5a51684-25b8-4845-8e60-0823ef234022" (UID: "f5a51684-25b8-4845-8e60-0823ef234022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.217417 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlj5f\" (UniqueName: \"kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f\") pod \"auto-csr-approver-29561344-74fs8\" (UID: \"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa\") " pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.217601 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.217614 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x58fr\" (UniqueName: \"kubernetes.io/projected/f5a51684-25b8-4845-8e60-0823ef234022-kube-api-access-x58fr\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.217623 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5a51684-25b8-4845-8e60-0823ef234022-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.320297 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlj5f\" (UniqueName: \"kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f\") pod \"auto-csr-approver-29561344-74fs8\" (UID: \"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa\") " pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.342076 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlj5f\" (UniqueName: \"kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f\") pod \"auto-csr-approver-29561344-74fs8\" (UID: \"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa\") " pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.429832 4736 generic.go:334] "Generic (PLEG): container finished" podID="f5a51684-25b8-4845-8e60-0823ef234022" containerID="3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244" 
exitCode=0 Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.429871 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerDied","Data":"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244"} Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.429895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8cxrm" event={"ID":"f5a51684-25b8-4845-8e60-0823ef234022","Type":"ContainerDied","Data":"6606af96e40a1decdc5f0b2649eeee2e8dabd52c739077bf9d28e3c6d4be7c37"} Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.429913 4736 scope.go:117] "RemoveContainer" containerID="3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.430038 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8cxrm" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.467363 4736 scope.go:117] "RemoveContainer" containerID="f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.473041 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.481543 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8cxrm"] Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.499893 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.513067 4736 scope.go:117] "RemoveContainer" containerID="191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.566535 4736 scope.go:117] "RemoveContainer" containerID="3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.567281 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244\": container with ID starting with 3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244 not found: ID does not exist" containerID="3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.567332 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244"} err="failed to get container status \"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244\": rpc error: code = NotFound desc = could not find container \"3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244\": container with ID starting with 3db71db5bd0c8dce137700f3a1e04438665d654aea9b759b9b48b8f1ab149244 not found: ID does not exist" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.567367 4736 scope.go:117] "RemoveContainer" containerID="f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.567711 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904\": container with ID starting with f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904 not found: ID does not exist" containerID="f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.567747 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904"} err="failed to get container status \"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904\": rpc error: code = NotFound desc = could not find container \"f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904\": container with ID starting with f5bc020555d8ecebcfd9895fed021d11b6a53540aca328470c1094e7f3817904 not found: ID does not exist" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.567770 4736 scope.go:117] "RemoveContainer" containerID="191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd" Mar 16 17:04:00 crc kubenswrapper[4736]: E0316 17:04:00.569243 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd\": container with ID starting with 191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd not found: ID does not exist" containerID="191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd" Mar 16 17:04:00 crc kubenswrapper[4736]: I0316 17:04:00.569284 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd"} err="failed to get container status \"191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd\": rpc error: code = NotFound desc = could not find container \"191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd\": container with ID starting with 191872f38d4707eeb6f13ac12a523af33aa433129a43830f52510fceefd758cd not found: ID does not exist" Mar 16 17:04:01 crc kubenswrapper[4736]: I0316 17:04:00.997998 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5a51684-25b8-4845-8e60-0823ef234022" path="/var/lib/kubelet/pods/f5a51684-25b8-4845-8e60-0823ef234022/volumes" Mar 16 17:04:01 crc kubenswrapper[4736]: I0316 17:04:01.001716 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561344-74fs8"] Mar 16 17:04:01 crc kubenswrapper[4736]: I0316 17:04:01.441507 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561344-74fs8" event={"ID":"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa","Type":"ContainerStarted","Data":"6a6cd5ed7cde36654a8c525f2cc4af9a8ce570a896ffe754e22bc6e36b339ba7"} Mar 16 17:04:02 crc kubenswrapper[4736]: I0316 17:04:02.466346 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561344-74fs8" event={"ID":"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa","Type":"ContainerStarted","Data":"f7c4cf37cd7bee9eaafffc91e55cec1c19bd5d5bf8b6dbe7b6e0b16c6afd9145"} Mar 16 17:04:02 crc kubenswrapper[4736]: I0316 17:04:02.490633 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561344-74fs8" podStartSLOduration=1.59441988 podStartE2EDuration="2.490607076s" podCreationTimestamp="2026-03-16 17:04:00 +0000 UTC" firstStartedPulling="2026-03-16 17:04:01.001685051 +0000 UTC 
m=+6642.729075378" lastFinishedPulling="2026-03-16 17:04:01.897872267 +0000 UTC m=+6643.625262574" observedRunningTime="2026-03-16 17:04:02.486637557 +0000 UTC m=+6644.214027854" watchObservedRunningTime="2026-03-16 17:04:02.490607076 +0000 UTC m=+6644.217997403" Mar 16 17:04:02 crc kubenswrapper[4736]: I0316 17:04:02.978997 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:04:02 crc kubenswrapper[4736]: E0316 17:04:02.979782 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:04:03 crc kubenswrapper[4736]: I0316 17:04:03.481596 4736 generic.go:334] "Generic (PLEG): container finished" podID="ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" containerID="f7c4cf37cd7bee9eaafffc91e55cec1c19bd5d5bf8b6dbe7b6e0b16c6afd9145" exitCode=0 Mar 16 17:04:03 crc kubenswrapper[4736]: I0316 17:04:03.481709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561344-74fs8" event={"ID":"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa","Type":"ContainerDied","Data":"f7c4cf37cd7bee9eaafffc91e55cec1c19bd5d5bf8b6dbe7b6e0b16c6afd9145"} Mar 16 17:04:04 crc kubenswrapper[4736]: I0316 17:04:04.857054 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:04 crc kubenswrapper[4736]: I0316 17:04:04.915269 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlj5f\" (UniqueName: \"kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f\") pod \"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa\" (UID: \"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa\") " Mar 16 17:04:04 crc kubenswrapper[4736]: I0316 17:04:04.921854 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f" (OuterVolumeSpecName: "kube-api-access-xlj5f") pod "ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" (UID: "ee2bbb31-8823-4403-bc49-02e2d0f8ebfa"). InnerVolumeSpecName "kube-api-access-xlj5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.019264 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlj5f\" (UniqueName: \"kubernetes.io/projected/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa-kube-api-access-xlj5f\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.499579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561344-74fs8" event={"ID":"ee2bbb31-8823-4403-bc49-02e2d0f8ebfa","Type":"ContainerDied","Data":"6a6cd5ed7cde36654a8c525f2cc4af9a8ce570a896ffe754e22bc6e36b339ba7"} Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.499912 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6cd5ed7cde36654a8c525f2cc4af9a8ce570a896ffe754e22bc6e36b339ba7" Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.499756 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561344-74fs8" Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.579949 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561338-zn5d4"] Mar 16 17:04:05 crc kubenswrapper[4736]: I0316 17:04:05.589365 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561338-zn5d4"] Mar 16 17:04:06 crc kubenswrapper[4736]: I0316 17:04:06.993070 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d91b75-e3b3-4622-9acd-7d160728454c" path="/var/lib/kubelet/pods/d0d91b75-e3b3-4622-9acd-7d160728454c/volumes" Mar 16 17:04:12 crc kubenswrapper[4736]: E0316 17:04:12.052532 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:59276->38.102.83.30:38289: write tcp 38.102.83.30:59276->38.102.83.30:38289: write: broken pipe Mar 16 17:04:16 crc kubenswrapper[4736]: I0316 17:04:16.978257 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:04:16 crc kubenswrapper[4736]: E0316 17:04:16.979057 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.033226 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:21 crc kubenswrapper[4736]: E0316 17:04:21.033917 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" containerName="oc" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.033931 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" containerName="oc" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.034165 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" containerName="oc" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.035445 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.055130 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.147760 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.148189 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.148237 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdqw\" (UniqueName: \"kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.250071 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgdqw\" (UniqueName: \"kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.250470 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.250531 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.250924 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.251003 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.273952 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tgdqw\" (UniqueName: \"kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw\") pod \"community-operators-l466k\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.359803 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:21 crc kubenswrapper[4736]: I0316 17:04:21.819546 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:22 crc kubenswrapper[4736]: I0316 17:04:22.687571 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerID="d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53" exitCode=0 Mar 16 17:04:22 crc kubenswrapper[4736]: I0316 17:04:22.687631 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerDied","Data":"d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53"} Mar 16 17:04:22 crc kubenswrapper[4736]: I0316 17:04:22.687811 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerStarted","Data":"88a6843475e69bbd815815d8f97e5c31fff67589001be811a783f1bed1f3ec20"} Mar 16 17:04:23 crc kubenswrapper[4736]: I0316 17:04:23.698884 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerStarted","Data":"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106"} Mar 16 17:04:25 crc kubenswrapper[4736]: I0316 17:04:25.723599 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerID="196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106" exitCode=0 Mar 16 17:04:25 crc kubenswrapper[4736]: I0316 17:04:25.723722 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerDied","Data":"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106"} Mar 16 17:04:26 crc kubenswrapper[4736]: I0316 17:04:26.739233 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerStarted","Data":"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5"} Mar 16 17:04:26 crc kubenswrapper[4736]: I0316 17:04:26.764839 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l466k" podStartSLOduration=2.210021334 podStartE2EDuration="5.764815943s" podCreationTimestamp="2026-03-16 17:04:21 +0000 UTC" firstStartedPulling="2026-03-16 17:04:22.689378302 +0000 UTC m=+6664.416768589" lastFinishedPulling="2026-03-16 17:04:26.244172921 +0000 UTC m=+6667.971563198" observedRunningTime="2026-03-16 17:04:26.759810236 +0000 UTC m=+6668.487200523" watchObservedRunningTime="2026-03-16 17:04:26.764815943 +0000 UTC m=+6668.492206230" Mar 16 17:04:31 crc kubenswrapper[4736]: I0316 17:04:31.361317 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:31 crc kubenswrapper[4736]: I0316 17:04:31.361915 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:31 crc kubenswrapper[4736]: I0316 17:04:31.979021 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:04:31 crc kubenswrapper[4736]: E0316 17:04:31.979571 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:04:32 crc kubenswrapper[4736]: I0316 17:04:32.404782 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l466k" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="registry-server" probeResult="failure" output=< Mar 16 17:04:32 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:04:32 crc kubenswrapper[4736]: > Mar 16 17:04:41 crc kubenswrapper[4736]: I0316 17:04:41.428257 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:41 crc kubenswrapper[4736]: I0316 17:04:41.486598 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:41 crc kubenswrapper[4736]: I0316 17:04:41.673973 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:42 crc kubenswrapper[4736]: I0316 17:04:42.919373 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l466k" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="registry-server" containerID="cri-o://b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5" gracePeriod=2 Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.431683 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.595672 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities\") pod \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.595773 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgdqw\" (UniqueName: \"kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw\") pod \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.595843 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content\") pod \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\" (UID: \"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3\") " Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.596587 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities" (OuterVolumeSpecName: "utilities") pod "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" (UID: "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.596969 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.609231 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw" (OuterVolumeSpecName: "kube-api-access-tgdqw") pod "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" (UID: "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3"). InnerVolumeSpecName "kube-api-access-tgdqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.685993 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" (UID: "a8f88ca0-59a5-4901-8c33-5a1e88cf59d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.698231 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgdqw\" (UniqueName: \"kubernetes.io/projected/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-kube-api-access-tgdqw\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.698254 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.927641 4736 generic.go:334] "Generic (PLEG): container finished" podID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerID="b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5" exitCode=0 Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.927688 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerDied","Data":"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5"} Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.927702 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l466k" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.927723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l466k" event={"ID":"a8f88ca0-59a5-4901-8c33-5a1e88cf59d3","Type":"ContainerDied","Data":"88a6843475e69bbd815815d8f97e5c31fff67589001be811a783f1bed1f3ec20"} Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.927748 4736 scope.go:117] "RemoveContainer" containerID="b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.962358 4736 scope.go:117] "RemoveContainer" containerID="196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106" Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.985814 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:43 crc kubenswrapper[4736]: I0316 17:04:43.995993 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l466k"] Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.007261 4736 scope.go:117] "RemoveContainer" containerID="d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.046198 4736 scope.go:117] "RemoveContainer" containerID="b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5" Mar 16 17:04:44 crc kubenswrapper[4736]: E0316 17:04:44.046843 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5\": container with ID starting with b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5 not found: ID does not exist" containerID="b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.046880 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5"} err="failed to get container status 
\"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5\": rpc error: code = NotFound desc = could not find container \"b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5\": container with ID starting with b2bd7ea5fd2407445029ecb0702a8c7b765497a819ef0bf7dd45f55e439910f5 not found: ID does not exist" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.046902 4736 scope.go:117] "RemoveContainer" containerID="196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106" Mar 16 17:04:44 crc kubenswrapper[4736]: E0316 17:04:44.047124 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106\": container with ID starting with 196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106 not found: ID does not exist" containerID="196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.047146 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106"} err="failed to get container status \"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106\": rpc error: code = NotFound desc = could not find container \"196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106\": container with ID starting with 196ba9803ef61b2e54c571e7489c15d78bec0ef48144d368e83438328a183106 not found: ID does not exist" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.047162 4736 scope.go:117] "RemoveContainer" containerID="d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53" Mar 16 17:04:44 crc kubenswrapper[4736]: E0316 17:04:44.047653 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53\": container with ID starting with d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53 not found: ID does not exist" containerID="d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.047693 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53"} err="failed to get container status \"d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53\": rpc error: code = NotFound desc = could not find container \"d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53\": container with ID starting with d0e19d65fac7bd07c8173afde64bd6fd5e4697c501ec82a06d5619f6f4290a53 not found: ID does not exist" Mar 16 17:04:44 crc kubenswrapper[4736]: I0316 17:04:44.996445 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" path="/var/lib/kubelet/pods/a8f88ca0-59a5-4901-8c33-5a1e88cf59d3/volumes" Mar 16 17:04:45 crc kubenswrapper[4736]: I0316 17:04:45.979396 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:04:45 crc kubenswrapper[4736]: E0316 17:04:45.980163 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:04:56 crc kubenswrapper[4736]: I0316 17:04:56.978714 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:04:56 crc kubenswrapper[4736]: E0316 17:04:56.979464 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:05:00 crc kubenswrapper[4736]: I0316 17:05:00.509577 4736 scope.go:117] "RemoveContainer" containerID="0852a199e0a4eadd238709c50a64694592ece722171d3005ad43ae475fec1e29" Mar 16 17:05:11 crc kubenswrapper[4736]: I0316 17:05:11.978582 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:05:11 crc kubenswrapper[4736]: E0316 17:05:11.979426 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:05:25 crc kubenswrapper[4736]: I0316 17:05:25.978609 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:05:25 crc kubenswrapper[4736]: E0316 17:05:25.979351 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:05:37 crc kubenswrapper[4736]: I0316 17:05:37.978236 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:05:37 crc kubenswrapper[4736]: E0316 17:05:37.978958 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:05:51 crc kubenswrapper[4736]: I0316 17:05:51.978287 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:05:52 crc kubenswrapper[4736]: I0316 17:05:52.597226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516"} Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.142918 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561346-xt99l"] Mar 16 17:06:00 crc kubenswrapper[4736]: E0316 17:06:00.143826 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="extract-content" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.143839 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="extract-content" Mar 16 17:06:00 crc kubenswrapper[4736]: E0316 17:06:00.143857 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="extract-utilities" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.143864 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="extract-utilities" Mar 16 17:06:00 crc kubenswrapper[4736]: E0316 17:06:00.143879 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="registry-server" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.143886 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="registry-server" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.144074 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f88ca0-59a5-4901-8c33-5a1e88cf59d3" containerName="registry-server" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.144673 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.147181 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.147383 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.147652 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.153974 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561346-xt99l"] Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.301892 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84w2v\" (UniqueName: \"kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v\") pod \"auto-csr-approver-29561346-xt99l\" (UID: \"4c9f77c0-581d-47de-8309-1961ae783cdf\") " pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.403805 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84w2v\" (UniqueName: \"kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v\") pod \"auto-csr-approver-29561346-xt99l\" (UID: \"4c9f77c0-581d-47de-8309-1961ae783cdf\") " pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.424470 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84w2v\" (UniqueName: \"kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v\") pod \"auto-csr-approver-29561346-xt99l\" (UID: \"4c9f77c0-581d-47de-8309-1961ae783cdf\") " pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.460929 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.975062 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561346-xt99l"] Mar 16 17:06:00 crc kubenswrapper[4736]: I0316 17:06:00.987093 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:06:01 crc kubenswrapper[4736]: I0316 17:06:01.707035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561346-xt99l" event={"ID":"4c9f77c0-581d-47de-8309-1961ae783cdf","Type":"ContainerStarted","Data":"542576305e7cf962e8f750c00cfb5787d34154151ad25a1928783037db7dff97"} Mar 16 17:06:02 crc kubenswrapper[4736]: I0316 17:06:02.716660 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561346-xt99l" event={"ID":"4c9f77c0-581d-47de-8309-1961ae783cdf","Type":"ContainerStarted","Data":"ea665db206b46309654faff4a29d52fbba0df0fd4e5a958747556151296cd7e6"} Mar 16 17:06:02 crc kubenswrapper[4736]: I0316 17:06:02.740030 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561346-xt99l" podStartSLOduration=1.623998598 podStartE2EDuration="2.740003241s" podCreationTimestamp="2026-03-16 17:06:00 +0000 UTC" firstStartedPulling="2026-03-16 17:06:00.984652388 +0000 UTC m=+6762.712042685" lastFinishedPulling="2026-03-16 17:06:02.100657041 +0000 UTC m=+6763.828047328" observedRunningTime="2026-03-16 17:06:02.73301912 +0000 UTC m=+6764.460409408" watchObservedRunningTime="2026-03-16 17:06:02.740003241 +0000 UTC m=+6764.467393568" Mar 16 17:06:03 crc kubenswrapper[4736]: I0316 17:06:03.727338 4736 generic.go:334] "Generic (PLEG): container finished" podID="4c9f77c0-581d-47de-8309-1961ae783cdf" containerID="ea665db206b46309654faff4a29d52fbba0df0fd4e5a958747556151296cd7e6" exitCode=0 Mar 16 17:06:03 crc kubenswrapper[4736]: I0316 17:06:03.727410 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561346-xt99l" event={"ID":"4c9f77c0-581d-47de-8309-1961ae783cdf","Type":"ContainerDied","Data":"ea665db206b46309654faff4a29d52fbba0df0fd4e5a958747556151296cd7e6"} Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.152720 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.301231 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84w2v\" (UniqueName: \"kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v\") pod \"4c9f77c0-581d-47de-8309-1961ae783cdf\" (UID: \"4c9f77c0-581d-47de-8309-1961ae783cdf\") " Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.306675 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v" (OuterVolumeSpecName: "kube-api-access-84w2v") pod "4c9f77c0-581d-47de-8309-1961ae783cdf" (UID: "4c9f77c0-581d-47de-8309-1961ae783cdf"). InnerVolumeSpecName "kube-api-access-84w2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.403756 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84w2v\" (UniqueName: \"kubernetes.io/projected/4c9f77c0-581d-47de-8309-1961ae783cdf-kube-api-access-84w2v\") on node \"crc\" DevicePath \"\"" Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.780180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561346-xt99l" event={"ID":"4c9f77c0-581d-47de-8309-1961ae783cdf","Type":"ContainerDied","Data":"542576305e7cf962e8f750c00cfb5787d34154151ad25a1928783037db7dff97"} Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.780237 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542576305e7cf962e8f750c00cfb5787d34154151ad25a1928783037db7dff97" Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.780309 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561346-xt99l" Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.824419 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561340-mkq5c"] Mar 16 17:06:05 crc kubenswrapper[4736]: I0316 17:06:05.834522 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561340-mkq5c"] Mar 16 17:06:06 crc kubenswrapper[4736]: I0316 17:06:06.995505 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37" path="/var/lib/kubelet/pods/cb19c45f-badc-45c9-9d8c-5a2e5fdf4c37/volumes" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.544882 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:23 crc kubenswrapper[4736]: E0316 17:06:23.545806 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9f77c0-581d-47de-8309-1961ae783cdf" containerName="oc" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.545819 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9f77c0-581d-47de-8309-1961ae783cdf" containerName="oc" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.546015 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9f77c0-581d-47de-8309-1961ae783cdf" containerName="oc" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.547310 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.568835 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.608869 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.608987 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvtr\" (UniqueName: \"kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.609025 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.710231 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvtr\" (UniqueName: \"kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.710586 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.710808 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.710959 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.711331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.748915 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qrvtr\" (UniqueName: \"kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr\") pod \"redhat-marketplace-bkxw9\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:23 crc kubenswrapper[4736]: I0316 17:06:23.885624 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:24 crc kubenswrapper[4736]: I0316 17:06:24.467703 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:24 crc kubenswrapper[4736]: I0316 17:06:24.968721 4736 generic.go:334] "Generic (PLEG): container finished" podID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerID="dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30" exitCode=0 Mar 16 17:06:24 crc kubenswrapper[4736]: I0316 17:06:24.968804 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerDied","Data":"dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30"} Mar 16 17:06:24 crc kubenswrapper[4736]: I0316 17:06:24.969046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerStarted","Data":"ef12a20395c41e9b1515e2d95fe69852fcb84e7558dc58fde53855ec84949126"} Mar 16 17:06:25 crc kubenswrapper[4736]: I0316 17:06:25.996838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerStarted","Data":"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2"} Mar 16 17:06:27 crc kubenswrapper[4736]: I0316 17:06:27.006996 4736 generic.go:334] "Generic (PLEG): container finished" podID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerID="cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2" exitCode=0 Mar 16 17:06:27 crc kubenswrapper[4736]: I0316 17:06:27.007026 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerDied","Data":"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2"} Mar 16 17:06:28 crc kubenswrapper[4736]: I0316 17:06:28.018791 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerStarted","Data":"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1"} Mar 16 17:06:28 crc kubenswrapper[4736]: I0316 17:06:28.044607 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bkxw9" podStartSLOduration=2.526240349 podStartE2EDuration="5.044588505s" podCreationTimestamp="2026-03-16 17:06:23 +0000 UTC" firstStartedPulling="2026-03-16 17:06:24.971032574 +0000 UTC m=+6786.698422861" lastFinishedPulling="2026-03-16 17:06:27.48938073 +0000 UTC m=+6789.216771017" observedRunningTime="2026-03-16 17:06:28.033701068 +0000 UTC m=+6789.761091355" watchObservedRunningTime="2026-03-16 17:06:28.044588505 +0000 UTC m=+6789.771978792" Mar 16 17:06:33 crc kubenswrapper[4736]: I0316 17:06:33.886365 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:33 crc kubenswrapper[4736]: I0316 17:06:33.886989 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:34 crc kubenswrapper[4736]: I0316 17:06:34.944783 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bkxw9" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="registry-server" probeResult="failure" output=< Mar 16 17:06:34 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:06:34 crc kubenswrapper[4736]: > Mar 16 17:06:43 crc kubenswrapper[4736]: I0316 17:06:43.961639 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:44 crc kubenswrapper[4736]: I0316 17:06:44.034233 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:44 crc kubenswrapper[4736]: I0316 17:06:44.213200 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.183093 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bkxw9" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="registry-server" containerID="cri-o://8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1" gracePeriod=2 Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.682349 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.854012 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content\") pod \"c116b89b-acd7-4965-9455-43b857dcf5c7\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.856448 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrvtr\" (UniqueName: \"kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr\") pod \"c116b89b-acd7-4965-9455-43b857dcf5c7\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.856554 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities\") pod \"c116b89b-acd7-4965-9455-43b857dcf5c7\" (UID: \"c116b89b-acd7-4965-9455-43b857dcf5c7\") " Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.857747 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities" (OuterVolumeSpecName: "utilities") pod "c116b89b-acd7-4965-9455-43b857dcf5c7" (UID: "c116b89b-acd7-4965-9455-43b857dcf5c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.863867 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr" (OuterVolumeSpecName: "kube-api-access-qrvtr") pod "c116b89b-acd7-4965-9455-43b857dcf5c7" (UID: "c116b89b-acd7-4965-9455-43b857dcf5c7"). InnerVolumeSpecName "kube-api-access-qrvtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.884488 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c116b89b-acd7-4965-9455-43b857dcf5c7" (UID: "c116b89b-acd7-4965-9455-43b857dcf5c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.959231 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.959286 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrvtr\" (UniqueName: \"kubernetes.io/projected/c116b89b-acd7-4965-9455-43b857dcf5c7-kube-api-access-qrvtr\") on node \"crc\" DevicePath \"\"" Mar 16 17:06:45 crc kubenswrapper[4736]: I0316 17:06:45.959306 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c116b89b-acd7-4965-9455-43b857dcf5c7-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.198982 4736 generic.go:334] "Generic (PLEG): container finished" podID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerID="8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1" exitCode=0 Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.199046 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerDied","Data":"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1"} Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.199088 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bkxw9" event={"ID":"c116b89b-acd7-4965-9455-43b857dcf5c7","Type":"ContainerDied","Data":"ef12a20395c41e9b1515e2d95fe69852fcb84e7558dc58fde53855ec84949126"} Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.199137 4736 scope.go:117] "RemoveContainer" containerID="8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.199051 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bkxw9" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.245283 4736 scope.go:117] "RemoveContainer" containerID="cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.275545 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.286804 4736 scope.go:117] "RemoveContainer" containerID="dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.291164 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bkxw9"] Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.343647 4736 scope.go:117] "RemoveContainer" containerID="8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1" Mar 16 17:06:46 crc kubenswrapper[4736]: E0316 17:06:46.344017 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1\": container with ID starting with 8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1 not found: ID does not exist" containerID="8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.344066 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1"} err="failed to get container status \"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1\": rpc error: code = NotFound desc = could not find container \"8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1\": container with ID starting with 8ea66e45d8770216b34ca521b62a4b52dc5c4443f1e605a94ff9594a2c1606b1 not found: ID does not exist" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.344094 4736 scope.go:117] "RemoveContainer" containerID="cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2" Mar 16 17:06:46 crc kubenswrapper[4736]: E0316 17:06:46.344404 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2\": container with ID starting with cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2 not found: ID does not exist" containerID="cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.344440 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2"} err="failed to get container status \"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2\": rpc error: code = NotFound desc = could not find container \"cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2\": container with ID starting with cb8cd7a0933b16417a7a02252c9e784b8053bef643cd60093b55ca80abe399c2 not found: ID does not exist" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.344459 4736 scope.go:117] "RemoveContainer" containerID="dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30" Mar 16 17:06:46 crc kubenswrapper[4736]: E0316 17:06:46.344892 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30\": container with ID starting with dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30 not found: ID does not exist" containerID="dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.344921 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30"} err="failed to get container status \"dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30\": rpc error: code = NotFound desc = could not find container \"dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30\": container with ID starting with dda5e510bfd6320c1664cdef9452729c45c717694634b52caef9164c8e143b30 not found: ID does not exist" Mar 16 17:06:46 crc kubenswrapper[4736]: I0316 17:06:46.993401 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" path="/var/lib/kubelet/pods/c116b89b-acd7-4965-9455-43b857dcf5c7/volumes" Mar 16 17:07:00 crc kubenswrapper[4736]: I0316 17:07:00.699689 4736 scope.go:117] "RemoveContainer" containerID="342782e2b2811c48ccd31483ff63adbd230a14dc85d6acae27d8e9c89c2099e0" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.143495 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561348-ml2ms"] Mar 16 17:08:00 crc kubenswrapper[4736]: E0316 17:08:00.144297 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="extract-utilities" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.144309 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="extract-utilities" Mar 16 17:08:00 crc kubenswrapper[4736]: E0316 17:08:00.144338 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="registry-server" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.144346 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="registry-server" Mar 16 17:08:00 crc kubenswrapper[4736]: E0316 17:08:00.144355 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="extract-content" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.144363 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="extract-content" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.144533 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c116b89b-acd7-4965-9455-43b857dcf5c7" containerName="registry-server" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.145131 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.147749 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.148186 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.148335 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.168992 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561348-ml2ms"] Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.248780 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsssd\" (UniqueName: \"kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd\") pod \"auto-csr-approver-29561348-ml2ms\" (UID: \"de592875-7551-4595-bab1-55152596e7b9\") " pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.350761 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsssd\" (UniqueName: \"kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd\") pod \"auto-csr-approver-29561348-ml2ms\" (UID: \"de592875-7551-4595-bab1-55152596e7b9\") " pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.370224 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsssd\" (UniqueName: \"kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd\") pod \"auto-csr-approver-29561348-ml2ms\" (UID: \"de592875-7551-4595-bab1-55152596e7b9\") " pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:00 crc kubenswrapper[4736]: I0316 17:08:00.513994 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:01 crc kubenswrapper[4736]: I0316 17:08:01.297100 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561348-ml2ms"] Mar 16 17:08:01 crc kubenswrapper[4736]: I0316 17:08:01.947764 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" event={"ID":"de592875-7551-4595-bab1-55152596e7b9","Type":"ContainerStarted","Data":"48ba59aca434aa7e028f45b3e4b8232e9cf79da0c1966b0fa689219fcec9c664"} Mar 16 17:08:03 crc kubenswrapper[4736]: I0316 17:08:03.972486 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" event={"ID":"de592875-7551-4595-bab1-55152596e7b9","Type":"ContainerStarted","Data":"ceba6341fe90f629e5a04a26d56c8b669372694088858d6696664786da8c1402"} Mar 16 17:08:04 crc kubenswrapper[4736]: I0316 17:08:04.006237 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" podStartSLOduration=1.828067689 podStartE2EDuration="4.006217306s" podCreationTimestamp="2026-03-16 17:08:00 +0000 UTC" firstStartedPulling="2026-03-16 17:08:01.293914859 +0000 UTC m=+6883.021305146" lastFinishedPulling="2026-03-16 17:08:03.472064476 +0000 UTC m=+6885.199454763" observedRunningTime="2026-03-16 17:08:03.99499056 +0000 UTC m=+6885.722380887" watchObservedRunningTime="2026-03-16 17:08:04.006217306 +0000 UTC m=+6885.733607593" Mar 16 17:08:05 crc kubenswrapper[4736]: I0316 17:08:05.999176 4736 generic.go:334] "Generic (PLEG): container finished" podID="de592875-7551-4595-bab1-55152596e7b9" containerID="ceba6341fe90f629e5a04a26d56c8b669372694088858d6696664786da8c1402" exitCode=0 Mar 16 17:08:05 crc kubenswrapper[4736]: I0316 17:08:05.999258 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" event={"ID":"de592875-7551-4595-bab1-55152596e7b9","Type":"ContainerDied","Data":"ceba6341fe90f629e5a04a26d56c8b669372694088858d6696664786da8c1402"} Mar 16 17:08:07 crc kubenswrapper[4736]: I0316 17:08:07.599576 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:07 crc kubenswrapper[4736]: I0316 17:08:07.765422 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsssd\" (UniqueName: \"kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd\") pod \"de592875-7551-4595-bab1-55152596e7b9\" (UID: \"de592875-7551-4595-bab1-55152596e7b9\") " Mar 16 17:08:07 crc kubenswrapper[4736]: I0316 17:08:07.772353 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd" (OuterVolumeSpecName: "kube-api-access-bsssd") pod "de592875-7551-4595-bab1-55152596e7b9" (UID: "de592875-7551-4595-bab1-55152596e7b9"). InnerVolumeSpecName "kube-api-access-bsssd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:08:07 crc kubenswrapper[4736]: I0316 17:08:07.867839 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsssd\" (UniqueName: \"kubernetes.io/projected/de592875-7551-4595-bab1-55152596e7b9-kube-api-access-bsssd\") on node \"crc\" DevicePath \"\"" Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.019534 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" event={"ID":"de592875-7551-4595-bab1-55152596e7b9","Type":"ContainerDied","Data":"48ba59aca434aa7e028f45b3e4b8232e9cf79da0c1966b0fa689219fcec9c664"} Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.019576 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ba59aca434aa7e028f45b3e4b8232e9cf79da0c1966b0fa689219fcec9c664" Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.019588 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561348-ml2ms" Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.092265 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561342-kgtqt"] Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.099935 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561342-kgtqt"] Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.507855 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.508233 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:08:08 crc kubenswrapper[4736]: I0316 17:08:08.989614 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="055191d5-6303-4930-a098-dd5ee0577c3d" path="/var/lib/kubelet/pods/055191d5-6303-4930-a098-dd5ee0577c3d/volumes" Mar 16 17:08:38 crc kubenswrapper[4736]: I0316 17:08:38.508209 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:08:38 crc kubenswrapper[4736]: I0316 17:08:38.508802 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:09:00 crc kubenswrapper[4736]: I0316 17:09:00.866318 4736 scope.go:117] "RemoveContainer" containerID="c88897e7b5e2fff733a9808148bfa43e6c2b57f8fbd24c80bc3baedd4ee10b44" Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.508174 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.508708 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.508782 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.510227 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.510332 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516" gracePeriod=600 Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.650224 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516" exitCode=0 Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.650275 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516"} Mar 16 17:09:08 crc kubenswrapper[4736]: I0316 17:09:08.650311 4736 scope.go:117] "RemoveContainer" containerID="8120f89b7bd8c93b5af22dc939dcca385066270c9a867445b0a0f76b2dcc0480" Mar 16 17:09:09 crc kubenswrapper[4736]: I0316 17:09:09.666217 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd"} Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.148634 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561350-fwgrh"] Mar 16 17:10:00 crc kubenswrapper[4736]: E0316 17:10:00.149597 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de592875-7551-4595-bab1-55152596e7b9" containerName="oc" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.149608 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="de592875-7551-4595-bab1-55152596e7b9" containerName="oc" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.149784 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="de592875-7551-4595-bab1-55152596e7b9" containerName="oc" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.150350 4736 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.153557 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.153725 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.154218 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.166909 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561350-fwgrh"] Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.210066 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzqd\" (UniqueName: \"kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd\") pod \"auto-csr-approver-29561350-fwgrh\" (UID: \"f06f6919-59df-4a8a-acff-57e7d7f9546e\") " pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.312997 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzqd\" (UniqueName: \"kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd\") pod \"auto-csr-approver-29561350-fwgrh\" (UID: \"f06f6919-59df-4a8a-acff-57e7d7f9546e\") " pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.338164 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzzqd\" (UniqueName: \"kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd\") pod \"auto-csr-approver-29561350-fwgrh\" (UID: \"f06f6919-59df-4a8a-acff-57e7d7f9546e\") " pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:00 crc kubenswrapper[4736]: I0316 17:10:00.469777 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:01 crc kubenswrapper[4736]: I0316 17:10:01.028553 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561350-fwgrh"] Mar 16 17:10:01 crc kubenswrapper[4736]: I0316 17:10:01.161879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" event={"ID":"f06f6919-59df-4a8a-acff-57e7d7f9546e","Type":"ContainerStarted","Data":"893ac5cfa7c93bd7ccc76078ab04ae8d06fb2ad8a8bbc4d9fb6c126f102b1def"} Mar 16 17:10:03 crc kubenswrapper[4736]: I0316 17:10:03.192811 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" event={"ID":"f06f6919-59df-4a8a-acff-57e7d7f9546e","Type":"ContainerStarted","Data":"41442a39fc9a39880defd3f4fd3390cd3456f2e3808af63d0a5c20b5735b3e9a"} Mar 16 17:10:03 crc kubenswrapper[4736]: I0316 17:10:03.215924 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" podStartSLOduration=1.653163129 podStartE2EDuration="3.215904047s" podCreationTimestamp="2026-03-16 17:10:00 +0000 UTC" firstStartedPulling="2026-03-16 17:10:01.041890204 +0000 UTC m=+7002.769280491" lastFinishedPulling="2026-03-16 17:10:02.604631122 +0000 UTC m=+7004.332021409" observedRunningTime="2026-03-16 17:10:03.207992341 +0000 UTC m=+7004.935382638" watchObservedRunningTime="2026-03-16 17:10:03.215904047 +0000 UTC m=+7004.943294334" Mar 16 17:10:04 crc kubenswrapper[4736]: I0316 17:10:04.203919 4736 generic.go:334] "Generic (PLEG): container finished" podID="f06f6919-59df-4a8a-acff-57e7d7f9546e" containerID="41442a39fc9a39880defd3f4fd3390cd3456f2e3808af63d0a5c20b5735b3e9a" exitCode=0 Mar 16 17:10:04 crc kubenswrapper[4736]: I0316 17:10:04.203978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" event={"ID":"f06f6919-59df-4a8a-acff-57e7d7f9546e","Type":"ContainerDied","Data":"41442a39fc9a39880defd3f4fd3390cd3456f2e3808af63d0a5c20b5735b3e9a"} Mar 16 17:10:05 crc kubenswrapper[4736]: I0316 17:10:05.715721 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:05 crc kubenswrapper[4736]: I0316 17:10:05.765218 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzzqd\" (UniqueName: \"kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd\") pod \"f06f6919-59df-4a8a-acff-57e7d7f9546e\" (UID: \"f06f6919-59df-4a8a-acff-57e7d7f9546e\") " Mar 16 17:10:05 crc kubenswrapper[4736]: I0316 17:10:05.786440 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd" (OuterVolumeSpecName: "kube-api-access-bzzqd") pod "f06f6919-59df-4a8a-acff-57e7d7f9546e" (UID: "f06f6919-59df-4a8a-acff-57e7d7f9546e"). InnerVolumeSpecName "kube-api-access-bzzqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:10:05 crc kubenswrapper[4736]: I0316 17:10:05.867051 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzzqd\" (UniqueName: \"kubernetes.io/projected/f06f6919-59df-4a8a-acff-57e7d7f9546e-kube-api-access-bzzqd\") on node \"crc\" DevicePath \"\"" Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.230356 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" event={"ID":"f06f6919-59df-4a8a-acff-57e7d7f9546e","Type":"ContainerDied","Data":"893ac5cfa7c93bd7ccc76078ab04ae8d06fb2ad8a8bbc4d9fb6c126f102b1def"} Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.230396 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893ac5cfa7c93bd7ccc76078ab04ae8d06fb2ad8a8bbc4d9fb6c126f102b1def" Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.230447 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561350-fwgrh" Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.300535 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561344-74fs8"] Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.312451 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561344-74fs8"] Mar 16 17:10:06 crc kubenswrapper[4736]: E0316 17:10:06.353516 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf06f6919_59df_4a8a_acff_57e7d7f9546e.slice/crio-893ac5cfa7c93bd7ccc76078ab04ae8d06fb2ad8a8bbc4d9fb6c126f102b1def\": RecentStats: unable to find data in memory cache]" Mar 16 17:10:06 crc kubenswrapper[4736]: I0316 17:10:06.991791 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2bbb31-8823-4403-bc49-02e2d0f8ebfa" path="/var/lib/kubelet/pods/ee2bbb31-8823-4403-bc49-02e2d0f8ebfa/volumes" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.526426 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:10:23 crc kubenswrapper[4736]: E0316 17:10:23.528225 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06f6919-59df-4a8a-acff-57e7d7f9546e" containerName="oc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.528332 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06f6919-59df-4a8a-acff-57e7d7f9546e" containerName="oc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.528642 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06f6919-59df-4a8a-acff-57e7d7f9546e" containerName="oc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.530082 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.545126 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.688176 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfs4\" (UniqueName: \"kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.688227 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.688269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.790650 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.791059 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.791538 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfs4\" (UniqueName: \"kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.791814 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.792081 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.818973 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pzfs4\" (UniqueName: \"kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4\") pod \"redhat-operators-qrrlc\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:23 crc kubenswrapper[4736]: I0316 17:10:23.854041 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:24 crc kubenswrapper[4736]: I0316 17:10:24.444092 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:10:24 crc kubenswrapper[4736]: W0316 17:10:24.458380 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729d8319_5020_4d4b_9dab_d7491306aff7.slice/crio-e2c1477345c252a23266a0c5dc07773362bc251fe357e39d953af6ac86bbbe0c WatchSource:0}: Error finding container e2c1477345c252a23266a0c5dc07773362bc251fe357e39d953af6ac86bbbe0c: Status 404 returned error can't find the container with id e2c1477345c252a23266a0c5dc07773362bc251fe357e39d953af6ac86bbbe0c Mar 16 17:10:25 crc kubenswrapper[4736]: I0316 17:10:25.409538 4736 generic.go:334] "Generic (PLEG): container finished" podID="729d8319-5020-4d4b-9dab-d7491306aff7" containerID="df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22" exitCode=0 Mar 16 17:10:25 crc kubenswrapper[4736]: I0316 17:10:25.409603 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerDied","Data":"df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22"} Mar 16 17:10:25 crc kubenswrapper[4736]: I0316 17:10:25.409867 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerStarted","Data":"e2c1477345c252a23266a0c5dc07773362bc251fe357e39d953af6ac86bbbe0c"} Mar 16 17:10:26 crc kubenswrapper[4736]: I0316 17:10:26.419786 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerStarted","Data":"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648"} Mar 16 17:10:31 crc kubenswrapper[4736]: I0316 17:10:31.466807 4736 generic.go:334] "Generic (PLEG): container finished" podID="729d8319-5020-4d4b-9dab-d7491306aff7" containerID="b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648" exitCode=0 Mar 16 17:10:31 crc kubenswrapper[4736]: I0316 17:10:31.466920 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerDied","Data":"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648"} Mar 16 17:10:32 crc kubenswrapper[4736]: I0316 17:10:32.485384 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerStarted","Data":"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170"} Mar 16 17:10:32 crc kubenswrapper[4736]: I0316 17:10:32.517689 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qrrlc" podStartSLOduration=3.011847467 podStartE2EDuration="9.517668705s" 
podCreationTimestamp="2026-03-16 17:10:23 +0000 UTC" firstStartedPulling="2026-03-16 17:10:25.41139817 +0000 UTC m=+7027.138788457" lastFinishedPulling="2026-03-16 17:10:31.917219408 +0000 UTC m=+7033.644609695" observedRunningTime="2026-03-16 17:10:32.505688049 +0000 UTC m=+7034.233078356" watchObservedRunningTime="2026-03-16 17:10:32.517668705 +0000 UTC m=+7034.245058992" Mar 16 17:10:33 crc kubenswrapper[4736]: I0316 17:10:33.854832 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:33 crc kubenswrapper[4736]: I0316 17:10:33.855183 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:10:34 crc kubenswrapper[4736]: I0316 17:10:34.899787 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrrlc" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" probeResult="failure" output=< Mar 16 17:10:34 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:10:34 crc kubenswrapper[4736]: > Mar 16 17:10:44 crc kubenswrapper[4736]: I0316 17:10:44.913593 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrrlc" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" probeResult="failure" output=< Mar 16 17:10:44 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:10:44 crc kubenswrapper[4736]: > Mar 16 17:10:54 crc kubenswrapper[4736]: I0316 17:10:54.913902 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qrrlc" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" probeResult="failure" output=< Mar 16 17:10:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:10:54 crc kubenswrapper[4736]: > Mar 16 17:11:00 crc kubenswrapper[4736]: I0316 17:11:00.988530 4736 scope.go:117] "RemoveContainer" containerID="f7c4cf37cd7bee9eaafffc91e55cec1c19bd5d5bf8b6dbe7b6e0b16c6afd9145" Mar 16 17:11:03 crc kubenswrapper[4736]: I0316 17:11:03.902569 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:11:03 crc kubenswrapper[4736]: I0316 17:11:03.953766 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:11:04 crc kubenswrapper[4736]: I0316 17:11:04.138894 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:11:05 crc kubenswrapper[4736]: I0316 17:11:05.779097 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qrrlc" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" containerID="cri-o://80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170" gracePeriod=2 Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.500948 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.595250 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzfs4\" (UniqueName: \"kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4\") pod \"729d8319-5020-4d4b-9dab-d7491306aff7\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.595322 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content\") pod \"729d8319-5020-4d4b-9dab-d7491306aff7\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.595367 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities\") pod \"729d8319-5020-4d4b-9dab-d7491306aff7\" (UID: \"729d8319-5020-4d4b-9dab-d7491306aff7\") " Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.596041 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities" (OuterVolumeSpecName: "utilities") pod "729d8319-5020-4d4b-9dab-d7491306aff7" (UID: "729d8319-5020-4d4b-9dab-d7491306aff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.618903 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4" (OuterVolumeSpecName: "kube-api-access-pzfs4") pod "729d8319-5020-4d4b-9dab-d7491306aff7" (UID: "729d8319-5020-4d4b-9dab-d7491306aff7"). InnerVolumeSpecName "kube-api-access-pzfs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.698155 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzfs4\" (UniqueName: \"kubernetes.io/projected/729d8319-5020-4d4b-9dab-d7491306aff7-kube-api-access-pzfs4\") on node \"crc\" DevicePath \"\"" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.698180 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.734590 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "729d8319-5020-4d4b-9dab-d7491306aff7" (UID: "729d8319-5020-4d4b-9dab-d7491306aff7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.788198 4736 generic.go:334] "Generic (PLEG): container finished" podID="729d8319-5020-4d4b-9dab-d7491306aff7" containerID="80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170" exitCode=0 Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.788245 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerDied","Data":"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170"} Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.788293 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qrrlc" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.788473 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qrrlc" event={"ID":"729d8319-5020-4d4b-9dab-d7491306aff7","Type":"ContainerDied","Data":"e2c1477345c252a23266a0c5dc07773362bc251fe357e39d953af6ac86bbbe0c"} Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.788497 4736 scope.go:117] "RemoveContainer" containerID="80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.799833 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/729d8319-5020-4d4b-9dab-d7491306aff7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.825708 4736 scope.go:117] "RemoveContainer" containerID="b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.838826 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.846850 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qrrlc"] Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.858968 4736 scope.go:117] "RemoveContainer" containerID="df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.926702 4736 scope.go:117] "RemoveContainer" containerID="80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170" Mar 16 17:11:06 crc kubenswrapper[4736]: E0316 17:11:06.936832 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170\": container with ID starting with 80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170 not found: ID does not exist" containerID="80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.936892 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170"} err="failed to get container status \"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170\": rpc error: code = NotFound desc = could not find container \"80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170\": container with ID starting with 80990e5856be9b183333e45f1e4090a89a5bc0fc5afeca38dd7e4d6739b2d170 not found: ID does not exist" Mar 16 17:11:06 crc 
kubenswrapper[4736]: I0316 17:11:06.936922 4736 scope.go:117] "RemoveContainer" containerID="b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648" Mar 16 17:11:06 crc kubenswrapper[4736]: E0316 17:11:06.937581 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648\": container with ID starting with b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648 not found: ID does not exist" containerID="b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.937629 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648"} err="failed to get container status \"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648\": rpc error: code = NotFound desc = could not find container \"b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648\": container with ID starting with b9fc5061524b402a8bd958a2350bf1c0c0d4707732a354dc5d12755d2fc91648 not found: ID does not exist" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.937652 4736 scope.go:117] "RemoveContainer" containerID="df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22" Mar 16 17:11:06 crc kubenswrapper[4736]: E0316 17:11:06.937945 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22\": container with ID starting with df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22 not found: ID does not exist" containerID="df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.937967 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22"} err="failed to get container status \"df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22\": rpc error: code = NotFound desc = could not find container \"df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22\": container with ID starting with df8a260447cb97637b77a47007384b7793de8e7ada297c4c5b35e3cf3c496d22 not found: ID does not exist" Mar 16 17:11:06 crc kubenswrapper[4736]: I0316 17:11:06.990364 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" path="/var/lib/kubelet/pods/729d8319-5020-4d4b-9dab-d7491306aff7/volumes" Mar 16 17:11:08 crc kubenswrapper[4736]: I0316 17:11:08.507514 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:11:08 crc kubenswrapper[4736]: I0316 17:11:08.508046 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:11:38 crc kubenswrapper[4736]: I0316 17:11:38.507629 4736 patch_prober.go:28] interesting 
pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:11:38 crc kubenswrapper[4736]: I0316 17:11:38.508281 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.170972 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561352-g2x4b"] Mar 16 17:12:00 crc kubenswrapper[4736]: E0316 17:12:00.172042 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="extract-utilities" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.172065 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="extract-utilities" Mar 16 17:12:00 crc kubenswrapper[4736]: E0316 17:12:00.172082 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.172092 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" Mar 16 17:12:00 crc kubenswrapper[4736]: E0316 17:12:00.172169 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="extract-content" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.172180 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="extract-content" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.172458 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d8319-5020-4d4b-9dab-d7491306aff7" containerName="registry-server" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.173540 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.176221 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.177342 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.177498 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.181761 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561352-g2x4b"] Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.320251 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpg9\" (UniqueName: \"kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9\") pod \"auto-csr-approver-29561352-g2x4b\" (UID: \"10b5063a-d652-4112-8fe7-a9a7a8726f2b\") " pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.422523 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkpg9\" (UniqueName: \"kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9\") pod \"auto-csr-approver-29561352-g2x4b\" (UID: \"10b5063a-d652-4112-8fe7-a9a7a8726f2b\") " pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.454345 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkpg9\" (UniqueName: \"kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9\") pod \"auto-csr-approver-29561352-g2x4b\" (UID: \"10b5063a-d652-4112-8fe7-a9a7a8726f2b\") " pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:00 crc kubenswrapper[4736]: I0316 17:12:00.495595 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:01 crc kubenswrapper[4736]: I0316 17:12:01.053931 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561352-g2x4b"] Mar 16 17:12:01 crc kubenswrapper[4736]: W0316 17:12:01.061283 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b5063a_d652_4112_8fe7_a9a7a8726f2b.slice/crio-2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7 WatchSource:0}: Error finding container 2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7: Status 404 returned error can't find the container with id 2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7 Mar 16 17:12:01 crc kubenswrapper[4736]: I0316 17:12:01.065040 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:12:01 crc kubenswrapper[4736]: I0316 17:12:01.326097 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" event={"ID":"10b5063a-d652-4112-8fe7-a9a7a8726f2b","Type":"ContainerStarted","Data":"2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7"} Mar 16 17:12:03 crc kubenswrapper[4736]: I0316 17:12:03.349027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" event={"ID":"10b5063a-d652-4112-8fe7-a9a7a8726f2b","Type":"ContainerStarted","Data":"ad42148a8300c3d5cebdf7b307dd435f319eb815cd183ca662c1167a3bdbd666"} Mar 16 17:12:03 crc kubenswrapper[4736]: I0316 17:12:03.379349 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" podStartSLOduration=2.306557571 podStartE2EDuration="3.379312914s" podCreationTimestamp="2026-03-16 17:12:00 +0000 UTC" firstStartedPulling="2026-03-16 17:12:01.063192714 +0000 UTC m=+7122.790583001" lastFinishedPulling="2026-03-16 17:12:02.135948047 +0000 UTC m=+7123.863338344" observedRunningTime="2026-03-16 17:12:03.367153092 +0000 UTC m=+7125.094543419" watchObservedRunningTime="2026-03-16 17:12:03.379312914 +0000 UTC m=+7125.106703241" Mar 16 17:12:04 crc kubenswrapper[4736]: I0316 17:12:04.361681 4736 generic.go:334] "Generic (PLEG): container finished" podID="10b5063a-d652-4112-8fe7-a9a7a8726f2b" containerID="ad42148a8300c3d5cebdf7b307dd435f319eb815cd183ca662c1167a3bdbd666" exitCode=0 Mar 16 17:12:04 crc kubenswrapper[4736]: I0316 17:12:04.361720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" event={"ID":"10b5063a-d652-4112-8fe7-a9a7a8726f2b","Type":"ContainerDied","Data":"ad42148a8300c3d5cebdf7b307dd435f319eb815cd183ca662c1167a3bdbd666"} Mar 16 17:12:05 crc kubenswrapper[4736]: I0316 17:12:05.752267 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:05 crc kubenswrapper[4736]: I0316 17:12:05.835421 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkpg9\" (UniqueName: \"kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9\") pod \"10b5063a-d652-4112-8fe7-a9a7a8726f2b\" (UID: \"10b5063a-d652-4112-8fe7-a9a7a8726f2b\") " Mar 16 17:12:05 crc kubenswrapper[4736]: I0316 17:12:05.841590 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9" (OuterVolumeSpecName: "kube-api-access-wkpg9") pod "10b5063a-d652-4112-8fe7-a9a7a8726f2b" (UID: "10b5063a-d652-4112-8fe7-a9a7a8726f2b"). InnerVolumeSpecName "kube-api-access-wkpg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:12:05 crc kubenswrapper[4736]: I0316 17:12:05.939649 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkpg9\" (UniqueName: \"kubernetes.io/projected/10b5063a-d652-4112-8fe7-a9a7a8726f2b-kube-api-access-wkpg9\") on node \"crc\" DevicePath \"\"" Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.389397 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" event={"ID":"10b5063a-d652-4112-8fe7-a9a7a8726f2b","Type":"ContainerDied","Data":"2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7"} Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.389439 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dddbe089dc27431622344be2ab32adba9fa4b32e6537d1946476431ab5a38c7" Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.389489 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561352-g2x4b" Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.464341 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561346-xt99l"] Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.475865 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561346-xt99l"] Mar 16 17:12:06 crc kubenswrapper[4736]: I0316 17:12:06.991874 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9f77c0-581d-47de-8309-1961ae783cdf" path="/var/lib/kubelet/pods/4c9f77c0-581d-47de-8309-1961ae783cdf/volumes" Mar 16 17:12:08 crc kubenswrapper[4736]: I0316 17:12:08.507739 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:12:08 crc kubenswrapper[4736]: I0316 17:12:08.507807 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:12:08 crc kubenswrapper[4736]: I0316 17:12:08.507864 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:12:08 crc kubenswrapper[4736]: I0316 17:12:08.508594 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:12:08 crc kubenswrapper[4736]: I0316 17:12:08.508675 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" gracePeriod=600 Mar 16 17:12:08 crc kubenswrapper[4736]: E0316 17:12:08.647511 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:12:09 crc kubenswrapper[4736]: I0316 17:12:09.421000 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" exitCode=0 Mar 16 17:12:09 crc kubenswrapper[4736]: I0316 17:12:09.421315 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd"} Mar 16 
17:12:09 crc kubenswrapper[4736]: I0316 17:12:09.421442 4736 scope.go:117] "RemoveContainer" containerID="38f41d5c4b53fbe69afa1aa3b97e7510f16462cbbd888f88f1544742fc962516" Mar 16 17:12:09 crc kubenswrapper[4736]: I0316 17:12:09.422013 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:12:09 crc kubenswrapper[4736]: E0316 17:12:09.422363 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:12:19 crc kubenswrapper[4736]: I0316 17:12:19.978612 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:12:19 crc kubenswrapper[4736]: E0316 17:12:19.979335 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:12:33 crc kubenswrapper[4736]: I0316 17:12:33.978274 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:12:33 crc kubenswrapper[4736]: E0316 17:12:33.980178 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:12:48 crc kubenswrapper[4736]: I0316 17:12:48.986203 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:12:48 crc kubenswrapper[4736]: E0316 17:12:48.987963 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:12:59 crc kubenswrapper[4736]: I0316 17:12:59.978068 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:12:59 crc kubenswrapper[4736]: E0316 17:12:59.978788 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:13:01 crc 
kubenswrapper[4736]: I0316 17:13:01.154019 4736 scope.go:117] "RemoveContainer" containerID="ea665db206b46309654faff4a29d52fbba0df0fd4e5a958747556151296cd7e6" Mar 16 17:13:10 crc kubenswrapper[4736]: I0316 17:13:10.978361 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:13:10 crc kubenswrapper[4736]: E0316 17:13:10.979554 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:13:23 crc kubenswrapper[4736]: I0316 17:13:23.978908 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:13:23 crc kubenswrapper[4736]: E0316 17:13:23.979763 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:13:38 crc kubenswrapper[4736]: I0316 17:13:38.983888 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:13:38 crc kubenswrapper[4736]: E0316 17:13:38.984640 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:13:51 crc kubenswrapper[4736]: I0316 17:13:51.978699 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:13:51 crc kubenswrapper[4736]: E0316 17:13:51.979930 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.160396 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561354-wmvp5"] Mar 16 17:14:00 crc kubenswrapper[4736]: E0316 17:14:00.161994 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b5063a-d652-4112-8fe7-a9a7a8726f2b" containerName="oc" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.162025 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b5063a-d652-4112-8fe7-a9a7a8726f2b" containerName="oc" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.162524 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="10b5063a-d652-4112-8fe7-a9a7a8726f2b" containerName="oc" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.163737 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.169858 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.170254 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.170586 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.174079 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561354-wmvp5"] Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.316979 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsrp\" (UniqueName: \"kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp\") pod \"auto-csr-approver-29561354-wmvp5\" (UID: \"3429d24b-73f4-4243-8f32-28d562143b19\") " pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.420167 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsrp\" (UniqueName: \"kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp\") pod \"auto-csr-approver-29561354-wmvp5\" (UID: \"3429d24b-73f4-4243-8f32-28d562143b19\") " pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.444935 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsrp\" (UniqueName: \"kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp\") pod \"auto-csr-approver-29561354-wmvp5\" (UID: \"3429d24b-73f4-4243-8f32-28d562143b19\") " pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:00 crc kubenswrapper[4736]: I0316 17:14:00.546082 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:01 crc kubenswrapper[4736]: I0316 17:14:01.050846 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561354-wmvp5"] Mar 16 17:14:01 crc kubenswrapper[4736]: I0316 17:14:01.870461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" event={"ID":"3429d24b-73f4-4243-8f32-28d562143b19","Type":"ContainerStarted","Data":"8abe52acd519db6a63522f426e9cfd4c993eb9a63816ae478aedcd9d66d9bf32"} Mar 16 17:14:02 crc kubenswrapper[4736]: I0316 17:14:02.891858 4736 generic.go:334] "Generic (PLEG): container finished" podID="3429d24b-73f4-4243-8f32-28d562143b19" containerID="9410fcca2c165e04d3f1fd3bec2735987dbe68c971e734b70ea64e9487cd4120" exitCode=0 Mar 16 17:14:02 crc kubenswrapper[4736]: I0316 17:14:02.891940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" event={"ID":"3429d24b-73f4-4243-8f32-28d562143b19","Type":"ContainerDied","Data":"9410fcca2c165e04d3f1fd3bec2735987dbe68c971e734b70ea64e9487cd4120"} Mar 16 17:14:02 crc kubenswrapper[4736]: I0316 17:14:02.978244 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:14:02 crc kubenswrapper[4736]: E0316 17:14:02.978516 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.349202 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.506039 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsrp\" (UniqueName: \"kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp\") pod \"3429d24b-73f4-4243-8f32-28d562143b19\" (UID: \"3429d24b-73f4-4243-8f32-28d562143b19\") " Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.512500 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp" (OuterVolumeSpecName: "kube-api-access-fqsrp") pod "3429d24b-73f4-4243-8f32-28d562143b19" (UID: "3429d24b-73f4-4243-8f32-28d562143b19"). InnerVolumeSpecName "kube-api-access-fqsrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.608464 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsrp\" (UniqueName: \"kubernetes.io/projected/3429d24b-73f4-4243-8f32-28d562143b19-kube-api-access-fqsrp\") on node \"crc\" DevicePath \"\"" Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.917955 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" event={"ID":"3429d24b-73f4-4243-8f32-28d562143b19","Type":"ContainerDied","Data":"8abe52acd519db6a63522f426e9cfd4c993eb9a63816ae478aedcd9d66d9bf32"} Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.918353 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abe52acd519db6a63522f426e9cfd4c993eb9a63816ae478aedcd9d66d9bf32" Mar 16 17:14:04 crc kubenswrapper[4736]: I0316 17:14:04.918032 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561354-wmvp5" Mar 16 17:14:05 crc kubenswrapper[4736]: I0316 17:14:05.431541 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561348-ml2ms"] Mar 16 17:14:05 crc kubenswrapper[4736]: I0316 17:14:05.442002 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561348-ml2ms"] Mar 16 17:14:06 crc kubenswrapper[4736]: I0316 17:14:06.989257 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de592875-7551-4595-bab1-55152596e7b9" path="/var/lib/kubelet/pods/de592875-7551-4595-bab1-55152596e7b9/volumes" Mar 16 17:14:16 crc kubenswrapper[4736]: I0316 17:14:16.978681 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:14:16 crc kubenswrapper[4736]: E0316 17:14:16.979501 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:14:29 crc kubenswrapper[4736]: I0316 17:14:29.978026 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:14:29 crc kubenswrapper[4736]: E0316 17:14:29.979196 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.371441 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:14:41 crc kubenswrapper[4736]: E0316 17:14:41.372844 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3429d24b-73f4-4243-8f32-28d562143b19" containerName="oc" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.372873 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3429d24b-73f4-4243-8f32-28d562143b19" 
containerName="oc" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.373319 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3429d24b-73f4-4243-8f32-28d562143b19" containerName="oc" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.378781 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.415060 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.479293 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hn8\" (UniqueName: \"kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.479641 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.479924 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.581585 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.581689 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.581726 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hn8\" (UniqueName: \"kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.582563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.582563 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.607547 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hn8\" (UniqueName: \"kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8\") pod \"certified-operators-28qk9\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.711342 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:41 crc kubenswrapper[4736]: I0316 17:14:41.978421 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:14:41 crc kubenswrapper[4736]: E0316 17:14:41.978990 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:14:42 crc kubenswrapper[4736]: I0316 17:14:42.252470 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:14:42 crc kubenswrapper[4736]: I0316 17:14:42.386243 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerStarted","Data":"e939b574fb6102167340955ad9aa56a98ab7311fb8111dd941e966d34895402e"} Mar 16 17:14:43 crc kubenswrapper[4736]: I0316 17:14:43.402078 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerDied","Data":"d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c"} Mar 16 17:14:43 crc kubenswrapper[4736]: I0316 17:14:43.402074 4736 generic.go:334] "Generic (PLEG): container finished" podID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerID="d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c" exitCode=0 Mar 16 17:14:45 crc kubenswrapper[4736]: I0316 17:14:45.419963 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerStarted","Data":"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75"} Mar 16 17:14:46 crc kubenswrapper[4736]: I0316 17:14:46.433849 4736 generic.go:334] "Generic (PLEG): container finished" podID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerID="80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75" exitCode=0 Mar 16 17:14:46 crc kubenswrapper[4736]: I0316 17:14:46.433912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerDied","Data":"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75"} Mar 16 17:14:47 crc kubenswrapper[4736]: I0316 17:14:47.462528 4736 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerStarted","Data":"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d"} Mar 16 17:14:47 crc kubenswrapper[4736]: I0316 17:14:47.488768 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-28qk9" podStartSLOduration=3.003970758 podStartE2EDuration="6.488751067s" podCreationTimestamp="2026-03-16 17:14:41 +0000 UTC" firstStartedPulling="2026-03-16 17:14:43.404128915 +0000 UTC m=+7285.131519202" lastFinishedPulling="2026-03-16 17:14:46.888909194 +0000 UTC m=+7288.616299511" observedRunningTime="2026-03-16 17:14:47.487459382 +0000 UTC m=+7289.214849669" watchObservedRunningTime="2026-03-16 17:14:47.488751067 +0000 UTC m=+7289.216141354" Mar 16 17:14:51 crc kubenswrapper[4736]: I0316 17:14:51.712293 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:51 crc kubenswrapper[4736]: I0316 17:14:51.714571 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:14:52 crc kubenswrapper[4736]: I0316 17:14:52.771304 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-28qk9" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="registry-server" probeResult="failure" output=< Mar 16 17:14:52 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:14:52 crc kubenswrapper[4736]: > Mar 16 17:14:52 crc kubenswrapper[4736]: I0316 17:14:52.981259 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:14:52 crc kubenswrapper[4736]: E0316 17:14:52.981507 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.153714 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4"] Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.156602 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.160680 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.163639 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.170490 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4"] Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.262392 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.262670 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g284m\" (UniqueName: \"kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.262749 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.365226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g284m\" (UniqueName: \"kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.365515 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.365751 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.367293 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume\") pod 
\"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.374752 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.393943 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g284m\" (UniqueName: \"kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m\") pod \"collect-profiles-29561355-9m7n4\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:00 crc kubenswrapper[4736]: I0316 17:15:00.478828 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.025553 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4"] Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.275801 4736 scope.go:117] "RemoveContainer" containerID="ceba6341fe90f629e5a04a26d56c8b669372694088858d6696664786da8c1402" Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.618802 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" event={"ID":"af5e6705-3151-42d7-a383-e6f0a25c3fb0","Type":"ContainerStarted","Data":"cb5bb0689213238df52be44ba4e173a78cdc54a11436474f7d8475bdd020db89"} Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.618878 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" event={"ID":"af5e6705-3151-42d7-a383-e6f0a25c3fb0","Type":"ContainerStarted","Data":"49c75459611ca8eabfedc2efc061f83883e37e152540ca9056554a4580a3888c"} Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.650473 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" podStartSLOduration=1.650449872 podStartE2EDuration="1.650449872s" podCreationTimestamp="2026-03-16 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:15:01.638018064 +0000 UTC m=+7303.365408361" watchObservedRunningTime="2026-03-16 17:15:01.650449872 +0000 UTC m=+7303.377840159" Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.765015 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:15:01 crc kubenswrapper[4736]: I0316 17:15:01.826860 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:15:02 crc kubenswrapper[4736]: I0316 17:15:02.006473 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:15:02 crc kubenswrapper[4736]: I0316 17:15:02.633135 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="af5e6705-3151-42d7-a383-e6f0a25c3fb0" containerID="cb5bb0689213238df52be44ba4e173a78cdc54a11436474f7d8475bdd020db89" exitCode=0 Mar 16 17:15:02 crc kubenswrapper[4736]: I0316 17:15:02.633256 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" event={"ID":"af5e6705-3151-42d7-a383-e6f0a25c3fb0","Type":"ContainerDied","Data":"cb5bb0689213238df52be44ba4e173a78cdc54a11436474f7d8475bdd020db89"} Mar 16 17:15:03 crc kubenswrapper[4736]: I0316 17:15:03.640924 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-28qk9" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="registry-server" containerID="cri-o://6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d" gracePeriod=2 Mar 16 17:15:03 crc kubenswrapper[4736]: E0316 17:15:03.747257 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20184e6_addc_49c2_a6e4_3c0ef7f56e42.slice/crio-6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20184e6_addc_49c2_a6e4_3c0ef7f56e42.slice/crio-conmon-6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d.scope\": RecentStats: unable to find data in memory cache]" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.185587 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.195027 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.243878 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities\") pod \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.243933 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume\") pod \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.244019 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hn8\" (UniqueName: \"kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8\") pod \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.244050 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume\") pod \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.244076 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content\") pod \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\" (UID: \"c20184e6-addc-49c2-a6e4-3c0ef7f56e42\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.244171 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g284m\" (UniqueName: \"kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m\") pod \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\" (UID: \"af5e6705-3151-42d7-a383-e6f0a25c3fb0\") " Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.247561 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "af5e6705-3151-42d7-a383-e6f0a25c3fb0" (UID: "af5e6705-3151-42d7-a383-e6f0a25c3fb0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.248384 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities" (OuterVolumeSpecName: "utilities") pod "c20184e6-addc-49c2-a6e4-3c0ef7f56e42" (UID: "c20184e6-addc-49c2-a6e4-3c0ef7f56e42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.256813 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8" (OuterVolumeSpecName: "kube-api-access-h4hn8") pod "c20184e6-addc-49c2-a6e4-3c0ef7f56e42" (UID: "c20184e6-addc-49c2-a6e4-3c0ef7f56e42"). InnerVolumeSpecName "kube-api-access-h4hn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.261716 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m" (OuterVolumeSpecName: "kube-api-access-g284m") pod "af5e6705-3151-42d7-a383-e6f0a25c3fb0" (UID: "af5e6705-3151-42d7-a383-e6f0a25c3fb0"). InnerVolumeSpecName "kube-api-access-g284m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.267416 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "af5e6705-3151-42d7-a383-e6f0a25c3fb0" (UID: "af5e6705-3151-42d7-a383-e6f0a25c3fb0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.322015 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c20184e6-addc-49c2-a6e4-3c0ef7f56e42" (UID: "c20184e6-addc-49c2-a6e4-3c0ef7f56e42"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345831 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345863 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af5e6705-3151-42d7-a383-e6f0a25c3fb0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345875 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4hn8\" (UniqueName: \"kubernetes.io/projected/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-kube-api-access-h4hn8\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345885 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af5e6705-3151-42d7-a383-e6f0a25c3fb0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345894 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20184e6-addc-49c2-a6e4-3c0ef7f56e42-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.345902 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g284m\" (UniqueName: \"kubernetes.io/projected/af5e6705-3151-42d7-a383-e6f0a25c3fb0-kube-api-access-g284m\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.652274 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.652168 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4" event={"ID":"af5e6705-3151-42d7-a383-e6f0a25c3fb0","Type":"ContainerDied","Data":"49c75459611ca8eabfedc2efc061f83883e37e152540ca9056554a4580a3888c"} Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.654016 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c75459611ca8eabfedc2efc061f83883e37e152540ca9056554a4580a3888c" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.654539 4736 generic.go:334] "Generic (PLEG): container finished" podID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerID="6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d" exitCode=0 Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.654577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerDied","Data":"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d"} Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.654605 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-28qk9" event={"ID":"c20184e6-addc-49c2-a6e4-3c0ef7f56e42","Type":"ContainerDied","Data":"e939b574fb6102167340955ad9aa56a98ab7311fb8111dd941e966d34895402e"} Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.654624 4736 scope.go:117] "RemoveContainer" containerID="6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.655200 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-28qk9" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.689044 4736 scope.go:117] "RemoveContainer" containerID="80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.721591 4736 scope.go:117] "RemoveContainer" containerID="d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.722549 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.732670 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-28qk9"] Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.741067 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7"] Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.750870 4736 scope.go:117] "RemoveContainer" containerID="6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.760743 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561310-g7hc7"] Mar 16 17:15:04 crc kubenswrapper[4736]: E0316 17:15:04.751395 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d\": container with ID starting with 6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d not found: ID does not exist" containerID="6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.763938 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d"} err="failed to get container status \"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d\": rpc error: code = NotFound desc = could not find container \"6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d\": container with ID starting with 6c0cc3f561931e95ef20119b958087828653c1b3af62048abaa037c0ed55499d not found: ID does not exist" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.763969 4736 scope.go:117] "RemoveContainer" containerID="80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75" Mar 16 17:15:04 crc kubenswrapper[4736]: E0316 17:15:04.767501 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75\": container with ID starting with 80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75 not found: ID does not exist" containerID="80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.767531 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75"} err="failed to get container status \"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75\": rpc error: code = NotFound desc = could not find container \"80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75\": container with ID starting with 
80221fd482a420cd0edd75fdd3fbdbd633a47edfa49d96a98e27bd5c8ac9ca75 not found: ID does not exist" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.767569 4736 scope.go:117] "RemoveContainer" containerID="d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c" Mar 16 17:15:04 crc kubenswrapper[4736]: E0316 17:15:04.767944 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c\": container with ID starting with d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c not found: ID does not exist" containerID="d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.767962 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c"} err="failed to get container status \"d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c\": rpc error: code = NotFound desc = could not find container \"d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c\": container with ID starting with d55c3db7f6fd8dc06c2601fd62ec65b09fefde2ccdef4e6d9f1625425eb5be3c not found: ID does not exist" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.994314 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c2f5143-440b-4a2c-90c5-ce6b185936c3" path="/var/lib/kubelet/pods/1c2f5143-440b-4a2c-90c5-ce6b185936c3/volumes" Mar 16 17:15:04 crc kubenswrapper[4736]: I0316 17:15:04.995820 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" path="/var/lib/kubelet/pods/c20184e6-addc-49c2-a6e4-3c0ef7f56e42/volumes" Mar 16 17:15:05 crc kubenswrapper[4736]: I0316 17:15:05.979045 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:15:05 crc kubenswrapper[4736]: E0316 17:15:05.979442 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:15:17 crc kubenswrapper[4736]: I0316 17:15:17.977466 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:15:17 crc kubenswrapper[4736]: E0316 17:15:17.978183 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.465767 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:21 crc kubenswrapper[4736]: E0316 17:15:21.467257 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="registry-server" Mar 
16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467288 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="registry-server" Mar 16 17:15:21 crc kubenswrapper[4736]: E0316 17:15:21.467347 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="extract-utilities" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467362 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="extract-utilities" Mar 16 17:15:21 crc kubenswrapper[4736]: E0316 17:15:21.467406 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af5e6705-3151-42d7-a383-e6f0a25c3fb0" containerName="collect-profiles" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467423 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="af5e6705-3151-42d7-a383-e6f0a25c3fb0" containerName="collect-profiles" Mar 16 17:15:21 crc kubenswrapper[4736]: E0316 17:15:21.467456 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="extract-content" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467469 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="extract-content" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467841 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="af5e6705-3151-42d7-a383-e6f0a25c3fb0" containerName="collect-profiles" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.467899 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20184e6-addc-49c2-a6e4-3c0ef7f56e42" containerName="registry-server" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.470768 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.490319 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.661535 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.661645 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.661740 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjhrk\" (UniqueName: \"kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.763358 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.763487 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.763603 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjhrk\" (UniqueName: \"kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.763924 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.763966 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.787900 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sjhrk\" (UniqueName: \"kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk\") pod \"community-operators-dg8d7\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:21 crc kubenswrapper[4736]: I0316 17:15:21.799844 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:22 crc kubenswrapper[4736]: I0316 17:15:22.345825 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:22 crc kubenswrapper[4736]: I0316 17:15:22.845240 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerID="975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c" exitCode=0 Mar 16 17:15:22 crc kubenswrapper[4736]: I0316 17:15:22.845281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerDied","Data":"975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c"} Mar 16 17:15:22 crc kubenswrapper[4736]: I0316 17:15:22.845304 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerStarted","Data":"243e9c924d336738fd12d63621414da89ed4834de8eb3ddd7161cb90b79346b4"} Mar 16 17:15:24 crc kubenswrapper[4736]: I0316 17:15:24.865061 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerStarted","Data":"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948"} Mar 16 17:15:25 crc kubenswrapper[4736]: I0316 17:15:25.879041 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerID="918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948" exitCode=0 Mar 16 17:15:25 crc kubenswrapper[4736]: I0316 17:15:25.879126 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerDied","Data":"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948"} Mar 16 17:15:26 crc kubenswrapper[4736]: I0316 17:15:26.889767 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerStarted","Data":"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168"} Mar 16 17:15:26 crc kubenswrapper[4736]: I0316 17:15:26.917290 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dg8d7" podStartSLOduration=2.477362142 podStartE2EDuration="5.917272166s" podCreationTimestamp="2026-03-16 17:15:21 +0000 UTC" firstStartedPulling="2026-03-16 17:15:22.847275424 +0000 UTC m=+7324.574665711" lastFinishedPulling="2026-03-16 17:15:26.287185428 +0000 UTC m=+7328.014575735" observedRunningTime="2026-03-16 17:15:26.91340785 +0000 UTC m=+7328.640798147" watchObservedRunningTime="2026-03-16 17:15:26.917272166 +0000 UTC m=+7328.644662453" Mar 16 17:15:29 crc kubenswrapper[4736]: I0316 17:15:29.978413 4736 scope.go:117] "RemoveContainer" 
containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:15:29 crc kubenswrapper[4736]: E0316 17:15:29.979361 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:15:31 crc kubenswrapper[4736]: I0316 17:15:31.800682 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:31 crc kubenswrapper[4736]: I0316 17:15:31.802288 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:31 crc kubenswrapper[4736]: I0316 17:15:31.878647 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:31 crc kubenswrapper[4736]: I0316 17:15:31.978802 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:32 crc kubenswrapper[4736]: I0316 17:15:32.111733 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:33 crc kubenswrapper[4736]: I0316 17:15:33.951671 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dg8d7" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="registry-server" containerID="cri-o://6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168" gracePeriod=2 Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.467918 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.514752 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhrk\" (UniqueName: \"kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk\") pod \"fb935d59-85e8-41c1-918c-a58c2eebdd38\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.514880 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content\") pod \"fb935d59-85e8-41c1-918c-a58c2eebdd38\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.515063 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities\") pod \"fb935d59-85e8-41c1-918c-a58c2eebdd38\" (UID: \"fb935d59-85e8-41c1-918c-a58c2eebdd38\") " Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.515935 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities" (OuterVolumeSpecName: "utilities") pod "fb935d59-85e8-41c1-918c-a58c2eebdd38" (UID: "fb935d59-85e8-41c1-918c-a58c2eebdd38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.528361 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk" (OuterVolumeSpecName: "kube-api-access-sjhrk") pod "fb935d59-85e8-41c1-918c-a58c2eebdd38" (UID: "fb935d59-85e8-41c1-918c-a58c2eebdd38"). InnerVolumeSpecName "kube-api-access-sjhrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.583994 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb935d59-85e8-41c1-918c-a58c2eebdd38" (UID: "fb935d59-85e8-41c1-918c-a58c2eebdd38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.623276 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.623332 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb935d59-85e8-41c1-918c-a58c2eebdd38-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.623346 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjhrk\" (UniqueName: \"kubernetes.io/projected/fb935d59-85e8-41c1-918c-a58c2eebdd38-kube-api-access-sjhrk\") on node \"crc\" DevicePath \"\"" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.962690 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerID="6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168" exitCode=0 Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.962756 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dg8d7" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.962778 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerDied","Data":"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168"} Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.963120 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dg8d7" event={"ID":"fb935d59-85e8-41c1-918c-a58c2eebdd38","Type":"ContainerDied","Data":"243e9c924d336738fd12d63621414da89ed4834de8eb3ddd7161cb90b79346b4"} Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.963164 4736 scope.go:117] "RemoveContainer" containerID="6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168" Mar 16 17:15:34 crc kubenswrapper[4736]: I0316 17:15:34.989738 4736 scope.go:117] "RemoveContainer" containerID="918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.015237 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.036039 4736 scope.go:117] "RemoveContainer" containerID="975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.042547 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dg8d7"] Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.080007 4736 scope.go:117] "RemoveContainer" containerID="6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168" Mar 16 17:15:35 crc kubenswrapper[4736]: E0316 17:15:35.080624 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168\": container with ID starting with 6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168 not found: ID does not exist" containerID="6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.080747 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168"} err="failed to get container status \"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168\": rpc error: code = NotFound desc = could not find container \"6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168\": container with ID starting with 6c1eef478a2ab25e3e08bb92698736544723aa1b35a92accff2a4a25922d2168 not found: ID does not exist" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.080839 4736 scope.go:117] "RemoveContainer" containerID="918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948" Mar 16 17:15:35 crc kubenswrapper[4736]: E0316 17:15:35.081587 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948\": container with ID starting with 918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948 not found: ID does not exist" containerID="918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.081690 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948"} err="failed to get container status \"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948\": rpc error: code = NotFound desc = could not find container \"918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948\": container with ID starting with 918fa1a700d942b883b094095b3e1ccdf19ea930bdacc3d471db0bee3811d948 not found: ID does not exist" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.081867 4736 scope.go:117] "RemoveContainer" containerID="975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c" Mar 16 17:15:35 crc kubenswrapper[4736]: E0316 17:15:35.082404 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c\": container with ID starting with 975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c not found: ID does not exist" containerID="975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c" Mar 16 17:15:35 crc kubenswrapper[4736]: I0316 17:15:35.082466 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c"} err="failed to get container status \"975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c\": rpc error: code = NotFound desc = could not find container \"975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c\": container with ID starting with 975b7d1677a74a852712f8244cb23c1b1291155fa345ebeb620aa39bfd731f5c not found: ID does not exist" Mar 16 17:15:36 crc kubenswrapper[4736]: I0316 17:15:36.994645 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" path="/var/lib/kubelet/pods/fb935d59-85e8-41c1-918c-a58c2eebdd38/volumes" Mar 16 17:15:41 crc kubenswrapper[4736]: I0316 17:15:41.978053 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:15:41 crc kubenswrapper[4736]: E0316 17:15:41.978863 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:15:55 crc kubenswrapper[4736]: I0316 17:15:55.978302 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:15:55 crc kubenswrapper[4736]: E0316 17:15:55.979314 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.150159 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561356-qvcj9"] Mar 16 17:16:00 crc 
kubenswrapper[4736]: E0316 17:16:00.151096 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="extract-utilities" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.151123 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="extract-utilities" Mar 16 17:16:00 crc kubenswrapper[4736]: E0316 17:16:00.151137 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="extract-content" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.151143 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="extract-content" Mar 16 17:16:00 crc kubenswrapper[4736]: E0316 17:16:00.151157 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="registry-server" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.151163 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="registry-server" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.151327 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb935d59-85e8-41c1-918c-a58c2eebdd38" containerName="registry-server" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.151974 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.154194 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.154502 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.157887 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.170650 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561356-qvcj9"] Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.321306 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2z2\" (UniqueName: \"kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2\") pod \"auto-csr-approver-29561356-qvcj9\" (UID: \"472cd732-7072-41e9-ba9a-72b506ff0885\") " pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.423243 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2z2\" (UniqueName: \"kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2\") pod \"auto-csr-approver-29561356-qvcj9\" (UID: \"472cd732-7072-41e9-ba9a-72b506ff0885\") " pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.442877 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2z2\" (UniqueName: \"kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2\") pod \"auto-csr-approver-29561356-qvcj9\" (UID: \"472cd732-7072-41e9-ba9a-72b506ff0885\") " pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:00 
crc kubenswrapper[4736]: I0316 17:16:00.469563 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:00 crc kubenswrapper[4736]: I0316 17:16:00.936969 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561356-qvcj9"] Mar 16 17:16:01 crc kubenswrapper[4736]: I0316 17:16:01.184319 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" event={"ID":"472cd732-7072-41e9-ba9a-72b506ff0885","Type":"ContainerStarted","Data":"9af7d12b30066c3fa04422974e0c81ba27eeabcee938883d42d222d248e50e1f"} Mar 16 17:16:01 crc kubenswrapper[4736]: I0316 17:16:01.350083 4736 scope.go:117] "RemoveContainer" containerID="aea39b907a8280104e5e7136644af7e5dff399e28cbed88f33714c3c958ce09c" Mar 16 17:16:03 crc kubenswrapper[4736]: I0316 17:16:03.206805 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" event={"ID":"472cd732-7072-41e9-ba9a-72b506ff0885","Type":"ContainerStarted","Data":"d7502bffa1a4650aec82f81b75d32f14177b52c00e47192b16396a3480be549e"} Mar 16 17:16:03 crc kubenswrapper[4736]: I0316 17:16:03.226661 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" podStartSLOduration=1.620208659 podStartE2EDuration="3.22664365s" podCreationTimestamp="2026-03-16 17:16:00 +0000 UTC" firstStartedPulling="2026-03-16 17:16:00.942473962 +0000 UTC m=+7362.669864259" lastFinishedPulling="2026-03-16 17:16:02.548908963 +0000 UTC m=+7364.276299250" observedRunningTime="2026-03-16 17:16:03.219762582 +0000 UTC m=+7364.947152869" watchObservedRunningTime="2026-03-16 17:16:03.22664365 +0000 UTC m=+7364.954033937" Mar 16 17:16:04 crc kubenswrapper[4736]: I0316 17:16:04.233441 4736 generic.go:334] "Generic (PLEG): container finished" podID="472cd732-7072-41e9-ba9a-72b506ff0885" containerID="d7502bffa1a4650aec82f81b75d32f14177b52c00e47192b16396a3480be549e" exitCode=0 Mar 16 17:16:04 crc kubenswrapper[4736]: I0316 17:16:04.234178 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" event={"ID":"472cd732-7072-41e9-ba9a-72b506ff0885","Type":"ContainerDied","Data":"d7502bffa1a4650aec82f81b75d32f14177b52c00e47192b16396a3480be549e"} Mar 16 17:16:05 crc kubenswrapper[4736]: I0316 17:16:05.597509 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:05 crc kubenswrapper[4736]: I0316 17:16:05.723424 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2z2\" (UniqueName: \"kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2\") pod \"472cd732-7072-41e9-ba9a-72b506ff0885\" (UID: \"472cd732-7072-41e9-ba9a-72b506ff0885\") " Mar 16 17:16:05 crc kubenswrapper[4736]: I0316 17:16:05.728972 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2" (OuterVolumeSpecName: "kube-api-access-pv2z2") pod "472cd732-7072-41e9-ba9a-72b506ff0885" (UID: "472cd732-7072-41e9-ba9a-72b506ff0885"). InnerVolumeSpecName "kube-api-access-pv2z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:16:05 crc kubenswrapper[4736]: I0316 17:16:05.825925 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2z2\" (UniqueName: \"kubernetes.io/projected/472cd732-7072-41e9-ba9a-72b506ff0885-kube-api-access-pv2z2\") on node \"crc\" DevicePath \"\"" Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.257202 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" event={"ID":"472cd732-7072-41e9-ba9a-72b506ff0885","Type":"ContainerDied","Data":"9af7d12b30066c3fa04422974e0c81ba27eeabcee938883d42d222d248e50e1f"} Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.257258 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af7d12b30066c3fa04422974e0c81ba27eeabcee938883d42d222d248e50e1f" Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.257255 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561356-qvcj9" Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.316235 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561350-fwgrh"] Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.327266 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561350-fwgrh"] Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.978822 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:16:06 crc kubenswrapper[4736]: E0316 17:16:06.979414 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:16:06 crc kubenswrapper[4736]: I0316 17:16:06.995260 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06f6919-59df-4a8a-acff-57e7d7f9546e" path="/var/lib/kubelet/pods/f06f6919-59df-4a8a-acff-57e7d7f9546e/volumes" Mar 16 17:16:21 crc kubenswrapper[4736]: I0316 17:16:21.978484 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:16:21 crc kubenswrapper[4736]: E0316 17:16:21.979250 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:16:36 crc kubenswrapper[4736]: I0316 17:16:36.978078 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:16:36 crc kubenswrapper[4736]: E0316 17:16:36.980274 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:16:49 crc kubenswrapper[4736]: I0316 17:16:49.978012 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:16:49 crc kubenswrapper[4736]: E0316 17:16:49.978795 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.461452 4736 scope.go:117] "RemoveContainer" containerID="41442a39fc9a39880defd3f4fd3390cd3456f2e3808af63d0a5c20b5735b3e9a" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.952033 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:01 crc kubenswrapper[4736]: E0316 17:17:01.952828 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472cd732-7072-41e9-ba9a-72b506ff0885" containerName="oc" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.952853 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="472cd732-7072-41e9-ba9a-72b506ff0885" containerName="oc" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.953126 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="472cd732-7072-41e9-ba9a-72b506ff0885" containerName="oc" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.954758 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.973421 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.978270 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:17:01 crc kubenswrapper[4736]: E0316 17:17:01.978509 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.994095 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dwcs\" (UniqueName: \"kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.994261 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:01 crc kubenswrapper[4736]: I0316 17:17:01.994465 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.096550 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.096689 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dwcs\" (UniqueName: \"kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.096823 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.097384 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.097482 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.121711 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dwcs\" (UniqueName: \"kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs\") pod \"redhat-marketplace-zwqmm\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.276772 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:02 crc kubenswrapper[4736]: I0316 17:17:02.946688 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:03 crc kubenswrapper[4736]: I0316 17:17:03.075935 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerStarted","Data":"148f6f369927d5d725b105d5d09f20cbab235d94ed53dfae9bd976973036a4bc"} Mar 16 17:17:04 crc kubenswrapper[4736]: I0316 17:17:04.086863 4736 generic.go:334] "Generic (PLEG): container finished" podID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerID="df339f032b9eae0fbf793cf6c74343d9587241b6c4abab7a9c733ed776f16ae3" exitCode=0 Mar 16 17:17:04 crc kubenswrapper[4736]: I0316 17:17:04.087036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerDied","Data":"df339f032b9eae0fbf793cf6c74343d9587241b6c4abab7a9c733ed776f16ae3"} Mar 16 17:17:04 crc kubenswrapper[4736]: I0316 17:17:04.090984 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:17:05 crc kubenswrapper[4736]: I0316 17:17:05.097908 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerStarted","Data":"cadc53bdd29729ad922d00a2ef0e91d999a3a62c18158646d2478823581ed0dd"} Mar 16 17:17:07 crc kubenswrapper[4736]: I0316 17:17:07.119915 4736 generic.go:334] "Generic (PLEG): container finished" podID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerID="cadc53bdd29729ad922d00a2ef0e91d999a3a62c18158646d2478823581ed0dd" exitCode=0 Mar 16 17:17:07 crc kubenswrapper[4736]: I0316 17:17:07.119973 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerDied","Data":"cadc53bdd29729ad922d00a2ef0e91d999a3a62c18158646d2478823581ed0dd"} Mar 16 17:17:08 crc kubenswrapper[4736]: I0316 17:17:08.129519 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" 
event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerStarted","Data":"4db23d842b6c7c0317890022b7b848d0da51acddeceae691624bab88e6af6d9a"} Mar 16 17:17:08 crc kubenswrapper[4736]: I0316 17:17:08.160971 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwqmm" podStartSLOduration=3.507026917 podStartE2EDuration="7.160948661s" podCreationTimestamp="2026-03-16 17:17:01 +0000 UTC" firstStartedPulling="2026-03-16 17:17:04.090687701 +0000 UTC m=+7425.818077998" lastFinishedPulling="2026-03-16 17:17:07.744609455 +0000 UTC m=+7429.471999742" observedRunningTime="2026-03-16 17:17:08.149083378 +0000 UTC m=+7429.876473665" watchObservedRunningTime="2026-03-16 17:17:08.160948661 +0000 UTC m=+7429.888338958" Mar 16 17:17:12 crc kubenswrapper[4736]: I0316 17:17:12.277794 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:12 crc kubenswrapper[4736]: I0316 17:17:12.279089 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:12 crc kubenswrapper[4736]: I0316 17:17:12.326572 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:13 crc kubenswrapper[4736]: I0316 17:17:13.238320 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:13 crc kubenswrapper[4736]: I0316 17:17:13.300837 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:13 crc kubenswrapper[4736]: I0316 17:17:13.978522 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:17:15 crc kubenswrapper[4736]: I0316 17:17:15.189129 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f"} Mar 16 17:17:15 crc kubenswrapper[4736]: I0316 17:17:15.189258 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwqmm" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="registry-server" containerID="cri-o://4db23d842b6c7c0317890022b7b848d0da51acddeceae691624bab88e6af6d9a" gracePeriod=2 Mar 16 17:17:16 crc kubenswrapper[4736]: I0316 17:17:16.205382 4736 generic.go:334] "Generic (PLEG): container finished" podID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerID="4db23d842b6c7c0317890022b7b848d0da51acddeceae691624bab88e6af6d9a" exitCode=0 Mar 16 17:17:16 crc kubenswrapper[4736]: I0316 17:17:16.205437 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerDied","Data":"4db23d842b6c7c0317890022b7b848d0da51acddeceae691624bab88e6af6d9a"} Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.383702 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.511539 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content\") pod \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.512146 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dwcs\" (UniqueName: \"kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs\") pod \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.512316 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities\") pod \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\" (UID: \"cc7d4f04-8f7b-4b2b-abab-8d41156b2813\") " Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.513733 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities" (OuterVolumeSpecName: "utilities") pod "cc7d4f04-8f7b-4b2b-abab-8d41156b2813" (UID: "cc7d4f04-8f7b-4b2b-abab-8d41156b2813"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.537916 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc7d4f04-8f7b-4b2b-abab-8d41156b2813" (UID: "cc7d4f04-8f7b-4b2b-abab-8d41156b2813"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.614277 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:16.614306 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.138919 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs" (OuterVolumeSpecName: "kube-api-access-6dwcs") pod "cc7d4f04-8f7b-4b2b-abab-8d41156b2813" (UID: "cc7d4f04-8f7b-4b2b-abab-8d41156b2813"). InnerVolumeSpecName "kube-api-access-6dwcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.215952 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwqmm" event={"ID":"cc7d4f04-8f7b-4b2b-abab-8d41156b2813","Type":"ContainerDied","Data":"148f6f369927d5d725b105d5d09f20cbab235d94ed53dfae9bd976973036a4bc"} Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.216008 4736 scope.go:117] "RemoveContainer" containerID="4db23d842b6c7c0317890022b7b848d0da51acddeceae691624bab88e6af6d9a" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.216211 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwqmm" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.226866 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dwcs\" (UniqueName: \"kubernetes.io/projected/cc7d4f04-8f7b-4b2b-abab-8d41156b2813-kube-api-access-6dwcs\") on node \"crc\" DevicePath \"\"" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.238809 4736 scope.go:117] "RemoveContainer" containerID="cadc53bdd29729ad922d00a2ef0e91d999a3a62c18158646d2478823581ed0dd" Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.256442 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.264234 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwqmm"] Mar 16 17:17:17 crc kubenswrapper[4736]: I0316 17:17:17.282995 4736 scope.go:117] "RemoveContainer" containerID="df339f032b9eae0fbf793cf6c74343d9587241b6c4abab7a9c733ed776f16ae3" Mar 16 17:17:18 crc kubenswrapper[4736]: I0316 17:17:18.996722 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" path="/var/lib/kubelet/pods/cc7d4f04-8f7b-4b2b-abab-8d41156b2813/volumes" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.159662 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561358-549zk"] Mar 16 17:18:00 crc kubenswrapper[4736]: E0316 17:18:00.160943 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="registry-server" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.160963 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="registry-server" Mar 16 17:18:00 crc kubenswrapper[4736]: E0316 17:18:00.160997 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="extract-utilities" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.161006 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="extract-utilities" Mar 16 17:18:00 crc kubenswrapper[4736]: E0316 17:18:00.161032 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="extract-content" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.161040 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" containerName="extract-content" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.161394 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7d4f04-8f7b-4b2b-abab-8d41156b2813" 
containerName="registry-server" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.162240 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.164535 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.165154 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.167588 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.177174 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561358-549zk"] Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.238619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fr4n\" (UniqueName: \"kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n\") pod \"auto-csr-approver-29561358-549zk\" (UID: \"7233ac61-3281-4356-a46c-5599365589c2\") " pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.339878 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fr4n\" (UniqueName: \"kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n\") pod \"auto-csr-approver-29561358-549zk\" (UID: \"7233ac61-3281-4356-a46c-5599365589c2\") " pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.361061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fr4n\" (UniqueName: \"kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n\") pod \"auto-csr-approver-29561358-549zk\" (UID: \"7233ac61-3281-4356-a46c-5599365589c2\") " pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:00 crc kubenswrapper[4736]: I0316 17:18:00.484590 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:01 crc kubenswrapper[4736]: I0316 17:18:01.010816 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561358-549zk"] Mar 16 17:18:01 crc kubenswrapper[4736]: I0316 17:18:01.626724 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561358-549zk" event={"ID":"7233ac61-3281-4356-a46c-5599365589c2","Type":"ContainerStarted","Data":"840a882c9cced4b50da19e9a018f9b249e0c631f02ca4b20ab923250d770da5a"} Mar 16 17:18:02 crc kubenswrapper[4736]: I0316 17:18:02.640174 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561358-549zk" event={"ID":"7233ac61-3281-4356-a46c-5599365589c2","Type":"ContainerStarted","Data":"7a6c09cb0d0adcd3aea8b3b3a855b6e458c0466199cd6b4c4bfa5f39392d0654"} Mar 16 17:18:02 crc kubenswrapper[4736]: I0316 17:18:02.658359 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561358-549zk" podStartSLOduration=1.741046863 podStartE2EDuration="2.658340584s" podCreationTimestamp="2026-03-16 17:18:00 +0000 UTC" firstStartedPulling="2026-03-16 17:18:01.015182528 +0000 UTC m=+7482.742572815" lastFinishedPulling="2026-03-16 17:18:01.932476249 +0000 UTC m=+7483.659866536" observedRunningTime="2026-03-16 17:18:02.657785429 +0000 UTC m=+7484.385175716" watchObservedRunningTime="2026-03-16 17:18:02.658340584 +0000 UTC m=+7484.385730871" Mar 16 17:18:03 crc kubenswrapper[4736]: I0316 17:18:03.652818 4736 generic.go:334] "Generic (PLEG): container finished" podID="7233ac61-3281-4356-a46c-5599365589c2" containerID="7a6c09cb0d0adcd3aea8b3b3a855b6e458c0466199cd6b4c4bfa5f39392d0654" exitCode=0 Mar 16 17:18:03 crc kubenswrapper[4736]: I0316 17:18:03.652859 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561358-549zk" event={"ID":"7233ac61-3281-4356-a46c-5599365589c2","Type":"ContainerDied","Data":"7a6c09cb0d0adcd3aea8b3b3a855b6e458c0466199cd6b4c4bfa5f39392d0654"} Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.224485 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.345229 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fr4n\" (UniqueName: \"kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n\") pod \"7233ac61-3281-4356-a46c-5599365589c2\" (UID: \"7233ac61-3281-4356-a46c-5599365589c2\") " Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.352158 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n" (OuterVolumeSpecName: "kube-api-access-7fr4n") pod "7233ac61-3281-4356-a46c-5599365589c2" (UID: "7233ac61-3281-4356-a46c-5599365589c2"). InnerVolumeSpecName "kube-api-access-7fr4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.447733 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fr4n\" (UniqueName: \"kubernetes.io/projected/7233ac61-3281-4356-a46c-5599365589c2-kube-api-access-7fr4n\") on node \"crc\" DevicePath \"\"" Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.677422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561358-549zk" event={"ID":"7233ac61-3281-4356-a46c-5599365589c2","Type":"ContainerDied","Data":"840a882c9cced4b50da19e9a018f9b249e0c631f02ca4b20ab923250d770da5a"} Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.678251 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840a882c9cced4b50da19e9a018f9b249e0c631f02ca4b20ab923250d770da5a" Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.677541 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561358-549zk" Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.762308 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561352-g2x4b"] Mar 16 17:18:05 crc kubenswrapper[4736]: I0316 17:18:05.770167 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561352-g2x4b"] Mar 16 17:18:07 crc kubenswrapper[4736]: I0316 17:18:07.000566 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b5063a-d652-4112-8fe7-a9a7a8726f2b" path="/var/lib/kubelet/pods/10b5063a-d652-4112-8fe7-a9a7a8726f2b/volumes" Mar 16 17:19:01 crc kubenswrapper[4736]: I0316 17:19:01.587452 4736 scope.go:117] "RemoveContainer" containerID="ad42148a8300c3d5cebdf7b307dd435f319eb815cd183ca662c1167a3bdbd666" Mar 16 17:19:38 crc kubenswrapper[4736]: I0316 17:19:38.507772 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:19:38 crc kubenswrapper[4736]: I0316 17:19:38.508395 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.156398 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561360-p56ds"] Mar 16 17:20:00 crc kubenswrapper[4736]: E0316 17:20:00.157359 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7233ac61-3281-4356-a46c-5599365589c2" containerName="oc" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.157373 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7233ac61-3281-4356-a46c-5599365589c2" containerName="oc" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.157563 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7233ac61-3281-4356-a46c-5599365589c2" containerName="oc" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.158256 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.161400 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.161400 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.166331 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561360-p56ds"] Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.168983 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.309141 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tjm\" (UniqueName: \"kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm\") pod \"auto-csr-approver-29561360-p56ds\" (UID: \"4b7eaaf7-83c2-4e95-8956-33bb887d53f0\") " pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.410758 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tjm\" (UniqueName: \"kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm\") pod \"auto-csr-approver-29561360-p56ds\" (UID: \"4b7eaaf7-83c2-4e95-8956-33bb887d53f0\") " pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.431062 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tjm\" (UniqueName: \"kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm\") pod \"auto-csr-approver-29561360-p56ds\" (UID: \"4b7eaaf7-83c2-4e95-8956-33bb887d53f0\") " pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:00 crc kubenswrapper[4736]: I0316 17:20:00.483708 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:01 crc kubenswrapper[4736]: I0316 17:20:01.545955 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561360-p56ds"] Mar 16 17:20:01 crc kubenswrapper[4736]: W0316 17:20:01.553647 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b7eaaf7_83c2_4e95_8956_33bb887d53f0.slice/crio-f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10 WatchSource:0}: Error finding container f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10: Status 404 returned error can't find the container with id f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10 Mar 16 17:20:01 crc kubenswrapper[4736]: I0316 17:20:01.783323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561360-p56ds" event={"ID":"4b7eaaf7-83c2-4e95-8956-33bb887d53f0","Type":"ContainerStarted","Data":"f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10"} Mar 16 17:20:03 crc kubenswrapper[4736]: I0316 17:20:03.803250 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561360-p56ds" event={"ID":"4b7eaaf7-83c2-4e95-8956-33bb887d53f0","Type":"ContainerStarted","Data":"6c23a0841e16eff33067985db5cb130d58e22a857104c6f5866cf6bb6d2c63bb"} Mar 16 17:20:03 crc kubenswrapper[4736]: I0316 17:20:03.828310 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561360-p56ds" podStartSLOduration=2.465410801 podStartE2EDuration="3.828245593s" podCreationTimestamp="2026-03-16 17:20:00 +0000 UTC" firstStartedPulling="2026-03-16 17:20:01.554553436 +0000 UTC m=+7603.281943723" lastFinishedPulling="2026-03-16 17:20:02.917388228 +0000 UTC m=+7604.644778515" observedRunningTime="2026-03-16 17:20:03.82335557 +0000 UTC m=+7605.550745857" watchObservedRunningTime="2026-03-16 17:20:03.828245593 +0000 UTC m=+7605.555635880" Mar 16 17:20:04 crc kubenswrapper[4736]: I0316 17:20:04.839279 4736 generic.go:334] "Generic (PLEG): container finished" podID="4b7eaaf7-83c2-4e95-8956-33bb887d53f0" containerID="6c23a0841e16eff33067985db5cb130d58e22a857104c6f5866cf6bb6d2c63bb" exitCode=0 Mar 16 17:20:04 crc kubenswrapper[4736]: I0316 17:20:04.839332 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561360-p56ds" event={"ID":"4b7eaaf7-83c2-4e95-8956-33bb887d53f0","Type":"ContainerDied","Data":"6c23a0841e16eff33067985db5cb130d58e22a857104c6f5866cf6bb6d2c63bb"} Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.328545 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.479021 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9tjm\" (UniqueName: \"kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm\") pod \"4b7eaaf7-83c2-4e95-8956-33bb887d53f0\" (UID: \"4b7eaaf7-83c2-4e95-8956-33bb887d53f0\") " Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.486341 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm" (OuterVolumeSpecName: "kube-api-access-s9tjm") pod "4b7eaaf7-83c2-4e95-8956-33bb887d53f0" (UID: "4b7eaaf7-83c2-4e95-8956-33bb887d53f0"). InnerVolumeSpecName "kube-api-access-s9tjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.582248 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9tjm\" (UniqueName: \"kubernetes.io/projected/4b7eaaf7-83c2-4e95-8956-33bb887d53f0-kube-api-access-s9tjm\") on node \"crc\" DevicePath \"\"" Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.864760 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561360-p56ds" event={"ID":"4b7eaaf7-83c2-4e95-8956-33bb887d53f0","Type":"ContainerDied","Data":"f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10"} Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.865135 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f532dace9cce6498535fe5971c944a6e1b414493b1ca410a694a6fbb43ac6b10" Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.865264 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561360-p56ds" Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.919397 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561354-wmvp5"] Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.930266 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561354-wmvp5"] Mar 16 17:20:06 crc kubenswrapper[4736]: I0316 17:20:06.994215 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3429d24b-73f4-4243-8f32-28d562143b19" path="/var/lib/kubelet/pods/3429d24b-73f4-4243-8f32-28d562143b19/volumes" Mar 16 17:20:08 crc kubenswrapper[4736]: I0316 17:20:08.508725 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:20:08 crc kubenswrapper[4736]: I0316 17:20:08.508780 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.368835 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:20:25 crc kubenswrapper[4736]: E0316 17:20:25.370197 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7eaaf7-83c2-4e95-8956-33bb887d53f0" containerName="oc" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.370222 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7eaaf7-83c2-4e95-8956-33bb887d53f0" containerName="oc" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.370627 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7eaaf7-83c2-4e95-8956-33bb887d53f0" containerName="oc" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.373049 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.386945 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.461296 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.461422 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.461569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zfr\" (UniqueName: \"kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.563513 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.563564 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.563629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zfr\" (UniqueName: \"kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.565294 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.565772 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.584558 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p2zfr\" (UniqueName: \"kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr\") pod \"redhat-operators-2hqjg\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:25 crc kubenswrapper[4736]: I0316 17:20:25.698824 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:26 crc kubenswrapper[4736]: I0316 17:20:26.271935 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:20:27 crc kubenswrapper[4736]: I0316 17:20:27.108311 4736 generic.go:334] "Generic (PLEG): container finished" podID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerID="eac0b136164c96e072ee0913d2434b6c48b8ff617209928919ba34610e27503f" exitCode=0 Mar 16 17:20:27 crc kubenswrapper[4736]: I0316 17:20:27.108355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerDied","Data":"eac0b136164c96e072ee0913d2434b6c48b8ff617209928919ba34610e27503f"} Mar 16 17:20:27 crc kubenswrapper[4736]: I0316 17:20:27.108692 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerStarted","Data":"6f3deaae258908656ea3f103bb255634b549919bae814720ba766c051094287e"} Mar 16 17:20:29 crc kubenswrapper[4736]: I0316 17:20:29.129479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerStarted","Data":"3b5903013ea5735aabebffa611b129b94a013be4966e37a63fe0ed3fa2a73678"} Mar 16 17:20:33 crc kubenswrapper[4736]: I0316 17:20:33.167723 4736 generic.go:334] "Generic (PLEG): container finished" podID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerID="3b5903013ea5735aabebffa611b129b94a013be4966e37a63fe0ed3fa2a73678" exitCode=0 Mar 16 17:20:33 crc kubenswrapper[4736]: I0316 17:20:33.167870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerDied","Data":"3b5903013ea5735aabebffa611b129b94a013be4966e37a63fe0ed3fa2a73678"} Mar 16 17:20:35 crc kubenswrapper[4736]: I0316 17:20:35.187492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerStarted","Data":"1aec3beae1ab6655d38fe3c4ca40183f025f74960b3cca9fbed0cfb693771207"} Mar 16 17:20:35 crc kubenswrapper[4736]: I0316 17:20:35.208891 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2hqjg" podStartSLOduration=3.373919317 podStartE2EDuration="10.20887214s" podCreationTimestamp="2026-03-16 17:20:25 +0000 UTC" firstStartedPulling="2026-03-16 17:20:27.110045815 +0000 UTC m=+7628.837436112" lastFinishedPulling="2026-03-16 17:20:33.944998638 +0000 UTC m=+7635.672388935" observedRunningTime="2026-03-16 17:20:35.205449436 +0000 UTC m=+7636.932839723" watchObservedRunningTime="2026-03-16 17:20:35.20887214 +0000 UTC m=+7636.936262437" Mar 16 17:20:35 crc kubenswrapper[4736]: I0316 17:20:35.701192 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 
16 17:20:35 crc kubenswrapper[4736]: I0316 17:20:35.701237 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:20:36 crc kubenswrapper[4736]: I0316 17:20:36.760450 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2hqjg" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" probeResult="failure" output=< Mar 16 17:20:36 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:20:36 crc kubenswrapper[4736]: > Mar 16 17:20:38 crc kubenswrapper[4736]: I0316 17:20:38.508017 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:20:38 crc kubenswrapper[4736]: I0316 17:20:38.508083 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:20:38 crc kubenswrapper[4736]: I0316 17:20:38.508181 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:20:38 crc kubenswrapper[4736]: I0316 17:20:38.510620 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:20:38 crc kubenswrapper[4736]: I0316 17:20:38.510700 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f" gracePeriod=600 Mar 16 17:20:39 crc kubenswrapper[4736]: I0316 17:20:39.224992 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f" exitCode=0 Mar 16 17:20:39 crc kubenswrapper[4736]: I0316 17:20:39.225226 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f"} Mar 16 17:20:39 crc kubenswrapper[4736]: I0316 17:20:39.226180 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c"} Mar 16 17:20:39 crc kubenswrapper[4736]: I0316 17:20:39.226874 4736 scope.go:117] "RemoveContainer" containerID="f17b864510b294e20b6ca4e11d7f279e618cdfff860b0b4f7d75ce62425038fd" Mar 16 17:20:46 crc kubenswrapper[4736]: I0316 17:20:46.842957 4736 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2hqjg" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" probeResult="failure" output=< Mar 16 17:20:46 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:20:46 crc kubenswrapper[4736]: > Mar 16 17:20:56 crc kubenswrapper[4736]: I0316 17:20:56.822589 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2hqjg" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" probeResult="failure" output=< Mar 16 17:20:56 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:20:56 crc kubenswrapper[4736]: > Mar 16 17:21:01 crc kubenswrapper[4736]: I0316 17:21:01.706338 4736 scope.go:117] "RemoveContainer" containerID="9410fcca2c165e04d3f1fd3bec2735987dbe68c971e734b70ea64e9487cd4120" Mar 16 17:21:06 crc kubenswrapper[4736]: I0316 17:21:06.752744 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2hqjg" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" probeResult="failure" output=< Mar 16 17:21:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:21:06 crc kubenswrapper[4736]: > Mar 16 17:21:15 crc kubenswrapper[4736]: I0316 17:21:15.755666 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:21:15 crc kubenswrapper[4736]: I0316 17:21:15.811463 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:21:16 crc kubenswrapper[4736]: I0316 17:21:16.016878 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:21:17 crc kubenswrapper[4736]: I0316 17:21:17.576242 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2hqjg" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" containerID="cri-o://1aec3beae1ab6655d38fe3c4ca40183f025f74960b3cca9fbed0cfb693771207" gracePeriod=2 Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.582553 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerDied","Data":"1aec3beae1ab6655d38fe3c4ca40183f025f74960b3cca9fbed0cfb693771207"} Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.583101 4736 generic.go:334] "Generic (PLEG): container finished" podID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerID="1aec3beae1ab6655d38fe3c4ca40183f025f74960b3cca9fbed0cfb693771207" exitCode=0 Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.767047 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.930854 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content\") pod \"cac1925f-a407-4a8e-9356-875887b1d8d3\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.931047 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2zfr\" (UniqueName: \"kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr\") pod \"cac1925f-a407-4a8e-9356-875887b1d8d3\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.931099 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities\") pod \"cac1925f-a407-4a8e-9356-875887b1d8d3\" (UID: \"cac1925f-a407-4a8e-9356-875887b1d8d3\") " Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.934601 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities" (OuterVolumeSpecName: "utilities") pod "cac1925f-a407-4a8e-9356-875887b1d8d3" (UID: "cac1925f-a407-4a8e-9356-875887b1d8d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:21:18 crc kubenswrapper[4736]: I0316 17:21:18.954360 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr" (OuterVolumeSpecName: "kube-api-access-p2zfr") pod "cac1925f-a407-4a8e-9356-875887b1d8d3" (UID: "cac1925f-a407-4a8e-9356-875887b1d8d3"). InnerVolumeSpecName "kube-api-access-p2zfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.035280 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2zfr\" (UniqueName: \"kubernetes.io/projected/cac1925f-a407-4a8e-9356-875887b1d8d3-kube-api-access-p2zfr\") on node \"crc\" DevicePath \"\"" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.035326 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.155817 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cac1925f-a407-4a8e-9356-875887b1d8d3" (UID: "cac1925f-a407-4a8e-9356-875887b1d8d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.239467 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cac1925f-a407-4a8e-9356-875887b1d8d3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.595372 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hqjg" event={"ID":"cac1925f-a407-4a8e-9356-875887b1d8d3","Type":"ContainerDied","Data":"6f3deaae258908656ea3f103bb255634b549919bae814720ba766c051094287e"} Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.595422 4736 scope.go:117] "RemoveContainer" containerID="1aec3beae1ab6655d38fe3c4ca40183f025f74960b3cca9fbed0cfb693771207" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.595544 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hqjg" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.635477 4736 scope.go:117] "RemoveContainer" containerID="3b5903013ea5735aabebffa611b129b94a013be4966e37a63fe0ed3fa2a73678" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.666791 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.669028 4736 scope.go:117] "RemoveContainer" containerID="eac0b136164c96e072ee0913d2434b6c48b8ff617209928919ba34610e27503f" Mar 16 17:21:19 crc kubenswrapper[4736]: I0316 17:21:19.684696 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2hqjg"] Mar 16 17:21:20 crc kubenswrapper[4736]: I0316 17:21:20.989715 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" path="/var/lib/kubelet/pods/cac1925f-a407-4a8e-9356-875887b1d8d3/volumes" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.203869 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561362-7m2vr"] Mar 16 17:22:00 crc kubenswrapper[4736]: E0316 17:22:00.206749 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="extract-content" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.206788 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="extract-content" Mar 16 17:22:00 crc kubenswrapper[4736]: E0316 17:22:00.206806 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.206813 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" Mar 16 17:22:00 crc kubenswrapper[4736]: E0316 17:22:00.206837 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="extract-utilities" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.206843 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="extract-utilities" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.207709 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac1925f-a407-4a8e-9356-875887b1d8d3" containerName="registry-server" Mar 16 17:22:00 
crc kubenswrapper[4736]: I0316 17:22:00.214297 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.228442 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.228455 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.234546 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.276414 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561362-7m2vr"] Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.382320 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlvm\" (UniqueName: \"kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm\") pod \"auto-csr-approver-29561362-7m2vr\" (UID: \"d9f3d7ad-c663-476e-ac04-283e3be440bd\") " pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.484422 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlvm\" (UniqueName: \"kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm\") pod \"auto-csr-approver-29561362-7m2vr\" (UID: \"d9f3d7ad-c663-476e-ac04-283e3be440bd\") " pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.509355 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnlvm\" (UniqueName: \"kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm\") pod \"auto-csr-approver-29561362-7m2vr\" (UID: \"d9f3d7ad-c663-476e-ac04-283e3be440bd\") " pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:00 crc kubenswrapper[4736]: I0316 17:22:00.539146 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:01 crc kubenswrapper[4736]: I0316 17:22:01.105860 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561362-7m2vr"] Mar 16 17:22:02 crc kubenswrapper[4736]: I0316 17:22:02.010270 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" event={"ID":"d9f3d7ad-c663-476e-ac04-283e3be440bd","Type":"ContainerStarted","Data":"f2077ef2ab04222f4c773676651d56d069112d4b64a749585dbb390a76cdd76a"} Mar 16 17:22:04 crc kubenswrapper[4736]: I0316 17:22:04.032892 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" event={"ID":"d9f3d7ad-c663-476e-ac04-283e3be440bd","Type":"ContainerStarted","Data":"afc34f8d22f26ab06d34ea72ff0b5b04c10ae01ed917b740ed65bf1d9fc9d8bc"} Mar 16 17:22:04 crc kubenswrapper[4736]: I0316 17:22:04.067657 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" podStartSLOduration=2.727161358 podStartE2EDuration="4.065995266s" podCreationTimestamp="2026-03-16 17:22:00 +0000 UTC" firstStartedPulling="2026-03-16 17:22:01.117118557 +0000 UTC m=+7722.844508844" lastFinishedPulling="2026-03-16 17:22:02.455952465 +0000 UTC m=+7724.183342752" observedRunningTime="2026-03-16 17:22:04.051306685 +0000 UTC m=+7725.778696982" watchObservedRunningTime="2026-03-16 17:22:04.065995266 +0000 UTC m=+7725.793385553" Mar 16 17:22:05 crc kubenswrapper[4736]: I0316 17:22:05.058247 4736 generic.go:334] "Generic (PLEG): container finished" podID="d9f3d7ad-c663-476e-ac04-283e3be440bd" containerID="afc34f8d22f26ab06d34ea72ff0b5b04c10ae01ed917b740ed65bf1d9fc9d8bc" exitCode=0 Mar 16 17:22:05 crc kubenswrapper[4736]: I0316 17:22:05.058658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" event={"ID":"d9f3d7ad-c663-476e-ac04-283e3be440bd","Type":"ContainerDied","Data":"afc34f8d22f26ab06d34ea72ff0b5b04c10ae01ed917b740ed65bf1d9fc9d8bc"} Mar 16 17:22:06 crc kubenswrapper[4736]: I0316 17:22:06.426180 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:06 crc kubenswrapper[4736]: I0316 17:22:06.513919 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnlvm\" (UniqueName: \"kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm\") pod \"d9f3d7ad-c663-476e-ac04-283e3be440bd\" (UID: \"d9f3d7ad-c663-476e-ac04-283e3be440bd\") " Mar 16 17:22:06 crc kubenswrapper[4736]: I0316 17:22:06.523318 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm" (OuterVolumeSpecName: "kube-api-access-nnlvm") pod "d9f3d7ad-c663-476e-ac04-283e3be440bd" (UID: "d9f3d7ad-c663-476e-ac04-283e3be440bd"). InnerVolumeSpecName "kube-api-access-nnlvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:22:06 crc kubenswrapper[4736]: I0316 17:22:06.617324 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnlvm\" (UniqueName: \"kubernetes.io/projected/d9f3d7ad-c663-476e-ac04-283e3be440bd-kube-api-access-nnlvm\") on node \"crc\" DevicePath \"\"" Mar 16 17:22:07 crc kubenswrapper[4736]: I0316 17:22:07.079829 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" event={"ID":"d9f3d7ad-c663-476e-ac04-283e3be440bd","Type":"ContainerDied","Data":"f2077ef2ab04222f4c773676651d56d069112d4b64a749585dbb390a76cdd76a"} Mar 16 17:22:07 crc kubenswrapper[4736]: I0316 17:22:07.079875 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2077ef2ab04222f4c773676651d56d069112d4b64a749585dbb390a76cdd76a" Mar 16 17:22:07 crc kubenswrapper[4736]: I0316 17:22:07.079903 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561362-7m2vr" Mar 16 17:22:07 crc kubenswrapper[4736]: I0316 17:22:07.154476 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561356-qvcj9"] Mar 16 17:22:07 crc kubenswrapper[4736]: I0316 17:22:07.166581 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561356-qvcj9"] Mar 16 17:22:08 crc kubenswrapper[4736]: I0316 17:22:08.994479 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472cd732-7072-41e9-ba9a-72b506ff0885" path="/var/lib/kubelet/pods/472cd732-7072-41e9-ba9a-72b506ff0885/volumes" Mar 16 17:22:38 crc kubenswrapper[4736]: I0316 17:22:38.507678 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:22:38 crc kubenswrapper[4736]: I0316 17:22:38.508417 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:23:01 crc kubenswrapper[4736]: I0316 17:23:01.890459 4736 scope.go:117] "RemoveContainer" containerID="d7502bffa1a4650aec82f81b75d32f14177b52c00e47192b16396a3480be549e" Mar 16 17:23:08 crc kubenswrapper[4736]: I0316 17:23:08.508695 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:23:08 crc kubenswrapper[4736]: I0316 17:23:08.509231 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.508031 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.508937 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.509005 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.510565 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.510673 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" gracePeriod=600 Mar 16 17:23:38 crc kubenswrapper[4736]: E0316 17:23:38.654152 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.962234 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" exitCode=0 Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.962306 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c"} Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.962903 4736 scope.go:117] "RemoveContainer" containerID="3d5146309e3e7073c7b63428d346eb6d13d10704803563bce8c1bd4611deb23f" Mar 16 17:23:38 crc kubenswrapper[4736]: I0316 17:23:38.963583 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:23:38 crc kubenswrapper[4736]: E0316 17:23:38.963896 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" 
Mar 16 17:23:53 crc kubenswrapper[4736]: I0316 17:23:53.978632 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:23:53 crc kubenswrapper[4736]: E0316 17:23:53.979760 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.161502 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561364-jznzd"] Mar 16 17:24:00 crc kubenswrapper[4736]: E0316 17:24:00.162799 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f3d7ad-c663-476e-ac04-283e3be440bd" containerName="oc" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.162833 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f3d7ad-c663-476e-ac04-283e3be440bd" containerName="oc" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.163275 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f3d7ad-c663-476e-ac04-283e3be440bd" containerName="oc" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.164424 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.167498 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.169943 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.171529 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.178263 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561364-jznzd"] Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.313988 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rtz\" (UniqueName: \"kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz\") pod \"auto-csr-approver-29561364-jznzd\" (UID: \"10b4ae30-6721-43aa-8e4b-8f2e1321dde6\") " pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.416651 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47rtz\" (UniqueName: \"kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz\") pod \"auto-csr-approver-29561364-jznzd\" (UID: \"10b4ae30-6721-43aa-8e4b-8f2e1321dde6\") " pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.453205 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47rtz\" (UniqueName: \"kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz\") pod \"auto-csr-approver-29561364-jznzd\" (UID: \"10b4ae30-6721-43aa-8e4b-8f2e1321dde6\") " 
pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:00 crc kubenswrapper[4736]: I0316 17:24:00.497453 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:01 crc kubenswrapper[4736]: I0316 17:24:01.007919 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561364-jznzd"] Mar 16 17:24:01 crc kubenswrapper[4736]: I0316 17:24:01.018562 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:24:01 crc kubenswrapper[4736]: I0316 17:24:01.196317 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561364-jznzd" event={"ID":"10b4ae30-6721-43aa-8e4b-8f2e1321dde6","Type":"ContainerStarted","Data":"66f124e31083df4a94d36eca70c631522984034ac500a5b482ba533c718bc7e0"} Mar 16 17:24:02 crc kubenswrapper[4736]: I0316 17:24:02.224362 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561364-jznzd" podStartSLOduration=1.363801723 podStartE2EDuration="2.224346695s" podCreationTimestamp="2026-03-16 17:24:00 +0000 UTC" firstStartedPulling="2026-03-16 17:24:01.018315602 +0000 UTC m=+7842.745705889" lastFinishedPulling="2026-03-16 17:24:01.878860534 +0000 UTC m=+7843.606250861" observedRunningTime="2026-03-16 17:24:02.221320302 +0000 UTC m=+7843.948710589" watchObservedRunningTime="2026-03-16 17:24:02.224346695 +0000 UTC m=+7843.951736982" Mar 16 17:24:03 crc kubenswrapper[4736]: I0316 17:24:03.220917 4736 generic.go:334] "Generic (PLEG): container finished" podID="10b4ae30-6721-43aa-8e4b-8f2e1321dde6" containerID="e8dcba289a4e4f1297b330074ac7329a40782364d222e8ca495aea384f669573" exitCode=0 Mar 16 17:24:03 crc kubenswrapper[4736]: I0316 17:24:03.220985 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561364-jznzd" event={"ID":"10b4ae30-6721-43aa-8e4b-8f2e1321dde6","Type":"ContainerDied","Data":"e8dcba289a4e4f1297b330074ac7329a40782364d222e8ca495aea384f669573"} Mar 16 17:24:04 crc kubenswrapper[4736]: I0316 17:24:04.593543 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:04 crc kubenswrapper[4736]: I0316 17:24:04.701026 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47rtz\" (UniqueName: \"kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz\") pod \"10b4ae30-6721-43aa-8e4b-8f2e1321dde6\" (UID: \"10b4ae30-6721-43aa-8e4b-8f2e1321dde6\") " Mar 16 17:24:04 crc kubenswrapper[4736]: I0316 17:24:04.708401 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz" (OuterVolumeSpecName: "kube-api-access-47rtz") pod "10b4ae30-6721-43aa-8e4b-8f2e1321dde6" (UID: "10b4ae30-6721-43aa-8e4b-8f2e1321dde6"). InnerVolumeSpecName "kube-api-access-47rtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:24:04 crc kubenswrapper[4736]: I0316 17:24:04.803260 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47rtz\" (UniqueName: \"kubernetes.io/projected/10b4ae30-6721-43aa-8e4b-8f2e1321dde6-kube-api-access-47rtz\") on node \"crc\" DevicePath \"\"" Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.244602 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561364-jznzd" event={"ID":"10b4ae30-6721-43aa-8e4b-8f2e1321dde6","Type":"ContainerDied","Data":"66f124e31083df4a94d36eca70c631522984034ac500a5b482ba533c718bc7e0"} Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.244648 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f124e31083df4a94d36eca70c631522984034ac500a5b482ba533c718bc7e0" Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.244650 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561364-jznzd" Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.298179 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561358-549zk"] Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.310936 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561358-549zk"] Mar 16 17:24:05 crc kubenswrapper[4736]: I0316 17:24:05.979995 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:24:05 crc kubenswrapper[4736]: E0316 17:24:05.980474 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:24:06 crc kubenswrapper[4736]: I0316 17:24:06.993944 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7233ac61-3281-4356-a46c-5599365589c2" path="/var/lib/kubelet/pods/7233ac61-3281-4356-a46c-5599365589c2/volumes" Mar 16 17:24:16 crc kubenswrapper[4736]: I0316 17:24:16.978928 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:24:16 crc kubenswrapper[4736]: E0316 17:24:16.979983 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:24:29 crc kubenswrapper[4736]: I0316 17:24:29.977914 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:24:29 crc kubenswrapper[4736]: E0316 17:24:29.978569 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:24:41 crc kubenswrapper[4736]: I0316 17:24:41.977749 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:24:41 crc kubenswrapper[4736]: E0316 17:24:41.978720 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:24:56 crc kubenswrapper[4736]: I0316 17:24:56.978730 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:24:56 crc kubenswrapper[4736]: E0316 17:24:56.980012 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:25:01 crc kubenswrapper[4736]: I0316 17:25:01.987259 4736 scope.go:117] "RemoveContainer" containerID="7a6c09cb0d0adcd3aea8b3b3a855b6e458c0466199cd6b4c4bfa5f39392d0654" Mar 16 17:25:07 crc kubenswrapper[4736]: I0316 17:25:07.979340 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:25:07 crc kubenswrapper[4736]: E0316 17:25:07.980217 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.253424 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:09 crc kubenswrapper[4736]: E0316 17:25:09.254034 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b4ae30-6721-43aa-8e4b-8f2e1321dde6" containerName="oc" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.254045 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b4ae30-6721-43aa-8e4b-8f2e1321dde6" containerName="oc" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.254249 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b4ae30-6721-43aa-8e4b-8f2e1321dde6" containerName="oc" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.257358 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.267207 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.334444 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvvp\" (UniqueName: \"kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.334545 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.334615 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.437151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.437257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.437360 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvvp\" (UniqueName: \"kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.437983 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.438125 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.465012 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4nvvp\" (UniqueName: \"kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp\") pod \"certified-operators-2bmbs\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:09 crc kubenswrapper[4736]: I0316 17:25:09.621031 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:10 crc kubenswrapper[4736]: I0316 17:25:10.118008 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:10 crc kubenswrapper[4736]: I0316 17:25:10.913572 4736 generic.go:334] "Generic (PLEG): container finished" podID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerID="d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa" exitCode=0 Mar 16 17:25:10 crc kubenswrapper[4736]: I0316 17:25:10.913617 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerDied","Data":"d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa"} Mar 16 17:25:10 crc kubenswrapper[4736]: I0316 17:25:10.914766 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerStarted","Data":"eb31f6d678b058938e67d44efaaf76caa0076bf94bb1253aea108c01b1fed487"} Mar 16 17:25:12 crc kubenswrapper[4736]: I0316 17:25:12.943195 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerStarted","Data":"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11"} Mar 16 17:25:13 crc kubenswrapper[4736]: I0316 17:25:13.953540 4736 generic.go:334] "Generic (PLEG): container finished" podID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerID="c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11" exitCode=0 Mar 16 17:25:13 crc kubenswrapper[4736]: I0316 17:25:13.953615 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerDied","Data":"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11"} Mar 16 17:25:14 crc kubenswrapper[4736]: I0316 17:25:14.963710 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerStarted","Data":"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef"} Mar 16 17:25:14 crc kubenswrapper[4736]: I0316 17:25:14.986379 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2bmbs" podStartSLOduration=2.5147198509999997 podStartE2EDuration="5.986359372s" podCreationTimestamp="2026-03-16 17:25:09 +0000 UTC" firstStartedPulling="2026-03-16 17:25:10.915566566 +0000 UTC m=+7912.642956843" lastFinishedPulling="2026-03-16 17:25:14.387206037 +0000 UTC m=+7916.114596364" observedRunningTime="2026-03-16 17:25:14.981913221 +0000 UTC m=+7916.709303518" watchObservedRunningTime="2026-03-16 17:25:14.986359372 +0000 UTC m=+7916.713749659" Mar 16 17:25:18 crc kubenswrapper[4736]: I0316 17:25:18.992791 4736 scope.go:117] "RemoveContainer" 
containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:25:18 crc kubenswrapper[4736]: E0316 17:25:18.993725 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:25:19 crc kubenswrapper[4736]: I0316 17:25:19.622754 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:19 crc kubenswrapper[4736]: I0316 17:25:19.622811 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:19 crc kubenswrapper[4736]: I0316 17:25:19.699859 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:20 crc kubenswrapper[4736]: I0316 17:25:20.112767 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:20 crc kubenswrapper[4736]: I0316 17:25:20.170631 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.074399 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2bmbs" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="registry-server" containerID="cri-o://df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef" gracePeriod=2 Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.588971 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.710729 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nvvp\" (UniqueName: \"kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp\") pod \"fdb9275e-e5d4-4789-a332-090b91d8fa04\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.710884 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content\") pod \"fdb9275e-e5d4-4789-a332-090b91d8fa04\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.711069 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities\") pod \"fdb9275e-e5d4-4789-a332-090b91d8fa04\" (UID: \"fdb9275e-e5d4-4789-a332-090b91d8fa04\") " Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.712017 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities" (OuterVolumeSpecName: "utilities") pod "fdb9275e-e5d4-4789-a332-090b91d8fa04" (UID: "fdb9275e-e5d4-4789-a332-090b91d8fa04"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.722708 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp" (OuterVolumeSpecName: "kube-api-access-4nvvp") pod "fdb9275e-e5d4-4789-a332-090b91d8fa04" (UID: "fdb9275e-e5d4-4789-a332-090b91d8fa04"). InnerVolumeSpecName "kube-api-access-4nvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.754678 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdb9275e-e5d4-4789-a332-090b91d8fa04" (UID: "fdb9275e-e5d4-4789-a332-090b91d8fa04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.813384 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.813424 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdb9275e-e5d4-4789-a332-090b91d8fa04-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:25:22 crc kubenswrapper[4736]: I0316 17:25:22.813450 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nvvp\" (UniqueName: \"kubernetes.io/projected/fdb9275e-e5d4-4789-a332-090b91d8fa04-kube-api-access-4nvvp\") on node \"crc\" DevicePath \"\"" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.085297 4736 generic.go:334] "Generic (PLEG): container finished" podID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerID="df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef" exitCode=0 Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.085357 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerDied","Data":"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef"} Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.085390 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bmbs" event={"ID":"fdb9275e-e5d4-4789-a332-090b91d8fa04","Type":"ContainerDied","Data":"eb31f6d678b058938e67d44efaaf76caa0076bf94bb1253aea108c01b1fed487"} Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.085412 4736 scope.go:117] "RemoveContainer" containerID="df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.085561 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bmbs" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.114073 4736 scope.go:117] "RemoveContainer" containerID="c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.116464 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.135877 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2bmbs"] Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.143375 4736 scope.go:117] "RemoveContainer" containerID="d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.193929 4736 scope.go:117] "RemoveContainer" containerID="df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef" Mar 16 17:25:23 crc kubenswrapper[4736]: E0316 17:25:23.196899 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef\": container with ID starting with df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef not found: ID does not exist" containerID="df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.196946 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef"} err="failed to get container status \"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef\": rpc error: code = NotFound desc = could not find container \"df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef\": container with ID starting with df43723161c8a11530ce27d2dcee291543bf75e4afd1a7bc2b12df268f4caaef not found: ID does not exist" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.196976 4736 scope.go:117] "RemoveContainer" containerID="c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11" Mar 16 17:25:23 crc kubenswrapper[4736]: E0316 17:25:23.197551 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11\": container with ID starting with c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11 not found: ID does not exist" containerID="c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.197614 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11"} err="failed to get container status \"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11\": rpc error: code = NotFound desc = could not find container \"c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11\": container with ID starting with c40b34272e929bb14bd563631adcac9c8c83ba6c19feb265a80617dda214ea11 not found: ID does not exist" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.197632 4736 scope.go:117] "RemoveContainer" containerID="d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa" Mar 16 17:25:23 crc kubenswrapper[4736]: E0316 17:25:23.197880 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa\": container with ID starting with d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa not found: ID does not exist" containerID="d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa" Mar 16 17:25:23 crc kubenswrapper[4736]: I0316 17:25:23.197894 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa"} err="failed to get container status \"d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa\": rpc error: code = NotFound desc = could not find container \"d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa\": container with ID starting with d8b1a4163d91ca2be823ff92ea1a003be8caa627fca5fff558502c8598ef73fa not found: ID does not exist" Mar 16 17:25:24 crc kubenswrapper[4736]: I0316 17:25:24.987202 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" path="/var/lib/kubelet/pods/fdb9275e-e5d4-4789-a332-090b91d8fa04/volumes" Mar 16 17:25:32 crc kubenswrapper[4736]: I0316 17:25:32.979187 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:25:32 crc kubenswrapper[4736]: E0316 17:25:32.980630 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:25:44 crc kubenswrapper[4736]: I0316 17:25:44.978248 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:25:44 crc kubenswrapper[4736]: E0316 17:25:44.979342 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.064746 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:25:50 crc kubenswrapper[4736]: E0316 17:25:50.065706 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="registry-server" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.065723 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="registry-server" Mar 16 17:25:50 crc kubenswrapper[4736]: E0316 17:25:50.065743 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="extract-content" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.065750 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="extract-content" Mar 16 17:25:50 crc kubenswrapper[4736]: E0316 
17:25:50.065800 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="extract-utilities" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.065808 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="extract-utilities" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.066021 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb9275e-e5d4-4789-a332-090b91d8fa04" containerName="registry-server" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.074299 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.091193 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.196906 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.197085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjc2\" (UniqueName: \"kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.197271 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.299464 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.299562 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfjc2\" (UniqueName: \"kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.299593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.299950 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.300050 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.329029 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjc2\" (UniqueName: \"kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2\") pod \"community-operators-ktcd8\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.410136 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:25:50 crc kubenswrapper[4736]: I0316 17:25:50.933782 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:25:51 crc kubenswrapper[4736]: I0316 17:25:51.338344 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb949e96-3156-4068-b156-8b4f44796f7e" containerID="a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174" exitCode=0 Mar 16 17:25:51 crc kubenswrapper[4736]: I0316 17:25:51.338389 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerDied","Data":"a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174"} Mar 16 17:25:51 crc kubenswrapper[4736]: I0316 17:25:51.338438 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerStarted","Data":"646d311c8ef73efbcc7ea1a36168aa0beb45e02e8c45acef3b23e3729cf47ae9"} Mar 16 17:25:52 crc kubenswrapper[4736]: I0316 17:25:52.349032 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerStarted","Data":"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5"} Mar 16 17:25:54 crc kubenswrapper[4736]: I0316 17:25:54.366062 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb949e96-3156-4068-b156-8b4f44796f7e" containerID="fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5" exitCode=0 Mar 16 17:25:54 crc kubenswrapper[4736]: I0316 17:25:54.366139 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerDied","Data":"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5"} Mar 16 17:25:55 crc kubenswrapper[4736]: I0316 17:25:55.380275 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerStarted","Data":"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec"} Mar 16 17:25:55 crc kubenswrapper[4736]: I0316 17:25:55.403865 
4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ktcd8" podStartSLOduration=2.005414968 podStartE2EDuration="5.403837059s" podCreationTimestamp="2026-03-16 17:25:50 +0000 UTC" firstStartedPulling="2026-03-16 17:25:51.341247998 +0000 UTC m=+7953.068638295" lastFinishedPulling="2026-03-16 17:25:54.739670099 +0000 UTC m=+7956.467060386" observedRunningTime="2026-03-16 17:25:55.40020447 +0000 UTC m=+7957.127594767" watchObservedRunningTime="2026-03-16 17:25:55.403837059 +0000 UTC m=+7957.131227346" Mar 16 17:25:57 crc kubenswrapper[4736]: I0316 17:25:57.978496 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:25:57 crc kubenswrapper[4736]: E0316 17:25:57.978975 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.155087 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561366-b6wzq"] Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.156514 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.160006 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.160992 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.161211 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.172414 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561366-b6wzq"] Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.327572 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnjrb\" (UniqueName: \"kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb\") pod \"auto-csr-approver-29561366-b6wzq\" (UID: \"6862a736-6544-4282-9a6c-bd5eb9c32c56\") " pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.411314 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.411999 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.430022 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnjrb\" (UniqueName: \"kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb\") pod \"auto-csr-approver-29561366-b6wzq\" (UID: \"6862a736-6544-4282-9a6c-bd5eb9c32c56\") " pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 
17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.466564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnjrb\" (UniqueName: \"kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb\") pod \"auto-csr-approver-29561366-b6wzq\" (UID: \"6862a736-6544-4282-9a6c-bd5eb9c32c56\") " pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.481837 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:00 crc kubenswrapper[4736]: I0316 17:26:00.998027 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561366-b6wzq"] Mar 16 17:26:01 crc kubenswrapper[4736]: I0316 17:26:01.438526 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" event={"ID":"6862a736-6544-4282-9a6c-bd5eb9c32c56","Type":"ContainerStarted","Data":"7afa3b3b35c8ab567c78cf641c091eb57fc51cc9a56f7566b174e9fe417c908a"} Mar 16 17:26:01 crc kubenswrapper[4736]: I0316 17:26:01.458044 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ktcd8" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="registry-server" probeResult="failure" output=< Mar 16 17:26:01 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:26:01 crc kubenswrapper[4736]: > Mar 16 17:26:03 crc kubenswrapper[4736]: I0316 17:26:03.462957 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" event={"ID":"6862a736-6544-4282-9a6c-bd5eb9c32c56","Type":"ContainerStarted","Data":"287d4902c1c6af074fa01eac7c1c91330e77d7245d7d7a7b757a36474f83c7e1"} Mar 16 17:26:03 crc kubenswrapper[4736]: I0316 17:26:03.484708 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" podStartSLOduration=2.25556997 podStartE2EDuration="3.484685523s" podCreationTimestamp="2026-03-16 17:26:00 +0000 UTC" firstStartedPulling="2026-03-16 17:26:01.007508679 +0000 UTC m=+7962.734898966" lastFinishedPulling="2026-03-16 17:26:02.236624232 +0000 UTC m=+7963.964014519" observedRunningTime="2026-03-16 17:26:03.476829758 +0000 UTC m=+7965.204220045" watchObservedRunningTime="2026-03-16 17:26:03.484685523 +0000 UTC m=+7965.212075810" Mar 16 17:26:04 crc kubenswrapper[4736]: I0316 17:26:04.476923 4736 generic.go:334] "Generic (PLEG): container finished" podID="6862a736-6544-4282-9a6c-bd5eb9c32c56" containerID="287d4902c1c6af074fa01eac7c1c91330e77d7245d7d7a7b757a36474f83c7e1" exitCode=0 Mar 16 17:26:04 crc kubenswrapper[4736]: I0316 17:26:04.476992 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" event={"ID":"6862a736-6544-4282-9a6c-bd5eb9c32c56","Type":"ContainerDied","Data":"287d4902c1c6af074fa01eac7c1c91330e77d7245d7d7a7b757a36474f83c7e1"} Mar 16 17:26:05 crc kubenswrapper[4736]: I0316 17:26:05.871568 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.047688 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnjrb\" (UniqueName: \"kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb\") pod \"6862a736-6544-4282-9a6c-bd5eb9c32c56\" (UID: \"6862a736-6544-4282-9a6c-bd5eb9c32c56\") " Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.058721 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb" (OuterVolumeSpecName: "kube-api-access-jnjrb") pod "6862a736-6544-4282-9a6c-bd5eb9c32c56" (UID: "6862a736-6544-4282-9a6c-bd5eb9c32c56"). InnerVolumeSpecName "kube-api-access-jnjrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.150390 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnjrb\" (UniqueName: \"kubernetes.io/projected/6862a736-6544-4282-9a6c-bd5eb9c32c56-kube-api-access-jnjrb\") on node \"crc\" DevicePath \"\"" Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.512741 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" event={"ID":"6862a736-6544-4282-9a6c-bd5eb9c32c56","Type":"ContainerDied","Data":"7afa3b3b35c8ab567c78cf641c091eb57fc51cc9a56f7566b174e9fe417c908a"} Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.512807 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7afa3b3b35c8ab567c78cf641c091eb57fc51cc9a56f7566b174e9fe417c908a" Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.512917 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561366-b6wzq" Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.557812 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561360-p56ds"] Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.566187 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561360-p56ds"] Mar 16 17:26:06 crc kubenswrapper[4736]: I0316 17:26:06.991692 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b7eaaf7-83c2-4e95-8956-33bb887d53f0" path="/var/lib/kubelet/pods/4b7eaaf7-83c2-4e95-8956-33bb887d53f0/volumes" Mar 16 17:26:10 crc kubenswrapper[4736]: I0316 17:26:10.459009 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:10 crc kubenswrapper[4736]: I0316 17:26:10.507989 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:10 crc kubenswrapper[4736]: I0316 17:26:10.695320 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:26:11 crc kubenswrapper[4736]: I0316 17:26:11.560150 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ktcd8" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="registry-server" containerID="cri-o://e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec" gracePeriod=2 Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.087630 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.170323 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities\") pod \"fb949e96-3156-4068-b156-8b4f44796f7e\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.170501 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjc2\" (UniqueName: \"kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2\") pod \"fb949e96-3156-4068-b156-8b4f44796f7e\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.170617 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content\") pod \"fb949e96-3156-4068-b156-8b4f44796f7e\" (UID: \"fb949e96-3156-4068-b156-8b4f44796f7e\") " Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.170851 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities" (OuterVolumeSpecName: "utilities") pod "fb949e96-3156-4068-b156-8b4f44796f7e" (UID: "fb949e96-3156-4068-b156-8b4f44796f7e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.171250 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.178303 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2" (OuterVolumeSpecName: "kube-api-access-jfjc2") pod "fb949e96-3156-4068-b156-8b4f44796f7e" (UID: "fb949e96-3156-4068-b156-8b4f44796f7e"). InnerVolumeSpecName "kube-api-access-jfjc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.224608 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb949e96-3156-4068-b156-8b4f44796f7e" (UID: "fb949e96-3156-4068-b156-8b4f44796f7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.273245 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjc2\" (UniqueName: \"kubernetes.io/projected/fb949e96-3156-4068-b156-8b4f44796f7e-kube-api-access-jfjc2\") on node \"crc\" DevicePath \"\"" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.273282 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb949e96-3156-4068-b156-8b4f44796f7e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.577163 4736 generic.go:334] "Generic (PLEG): container finished" podID="fb949e96-3156-4068-b156-8b4f44796f7e" containerID="e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec" exitCode=0 Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.577219 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerDied","Data":"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec"} Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.577242 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktcd8" event={"ID":"fb949e96-3156-4068-b156-8b4f44796f7e","Type":"ContainerDied","Data":"646d311c8ef73efbcc7ea1a36168aa0beb45e02e8c45acef3b23e3729cf47ae9"} Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.577252 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ktcd8" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.577257 4736 scope.go:117] "RemoveContainer" containerID="e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.614816 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.618689 4736 scope.go:117] "RemoveContainer" containerID="fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.623723 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ktcd8"] Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.647871 4736 scope.go:117] "RemoveContainer" containerID="a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.691023 4736 scope.go:117] "RemoveContainer" containerID="e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec" Mar 16 17:26:12 crc kubenswrapper[4736]: E0316 17:26:12.691586 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec\": container with ID starting with e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec not found: ID does not exist" containerID="e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.691625 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec"} err="failed to get container status \"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec\": rpc error: code = NotFound desc = could not find container \"e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec\": container with ID starting with e17f2781ab384be906e295c772fe12c0965b1ffc6e34a4dc990278bd78ec23ec not found: ID does not exist" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.691653 4736 scope.go:117] "RemoveContainer" containerID="fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5" Mar 16 17:26:12 crc kubenswrapper[4736]: E0316 17:26:12.691916 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5\": container with ID starting with fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5 not found: ID does not exist" containerID="fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.691945 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5"} err="failed to get container status \"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5\": rpc error: code = NotFound desc = could not find container \"fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5\": container with ID starting with fd71e38441725be2d07608666437651d98f3a3e1bb50dbe4a345385a949e56c5 not found: ID does not exist" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.692062 4736 scope.go:117] "RemoveContainer" 
containerID="a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174" Mar 16 17:26:12 crc kubenswrapper[4736]: E0316 17:26:12.692529 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174\": container with ID starting with a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174 not found: ID does not exist" containerID="a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.692559 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174"} err="failed to get container status \"a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174\": rpc error: code = NotFound desc = could not find container \"a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174\": container with ID starting with a7cb69f4fcfc19e0f3cd02cf5fa0420650abcffc8136d58a3ff0175c6f934174 not found: ID does not exist" Mar 16 17:26:12 crc kubenswrapper[4736]: I0316 17:26:12.982981 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:26:12 crc kubenswrapper[4736]: E0316 17:26:12.983548 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:26:13 crc kubenswrapper[4736]: I0316 17:26:13.002438 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" path="/var/lib/kubelet/pods/fb949e96-3156-4068-b156-8b4f44796f7e/volumes" Mar 16 17:26:23 crc kubenswrapper[4736]: I0316 17:26:23.978933 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:26:23 crc kubenswrapper[4736]: E0316 17:26:23.980169 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:26:35 crc kubenswrapper[4736]: I0316 17:26:35.978914 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:26:35 crc kubenswrapper[4736]: E0316 17:26:35.980478 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:26:46 crc kubenswrapper[4736]: I0316 17:26:46.979183 4736 scope.go:117] "RemoveContainer" 
containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:26:46 crc kubenswrapper[4736]: E0316 17:26:46.980002 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:26:58 crc kubenswrapper[4736]: I0316 17:26:58.984008 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:26:58 crc kubenswrapper[4736]: E0316 17:26:58.984675 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:27:02 crc kubenswrapper[4736]: I0316 17:27:02.129085 4736 scope.go:117] "RemoveContainer" containerID="6c23a0841e16eff33067985db5cb130d58e22a857104c6f5866cf6bb6d2c63bb" Mar 16 17:27:13 crc kubenswrapper[4736]: I0316 17:27:13.978168 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:27:13 crc kubenswrapper[4736]: E0316 17:27:13.979090 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:27:26 crc kubenswrapper[4736]: I0316 17:27:26.978065 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:27:26 crc kubenswrapper[4736]: E0316 17:27:26.979007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:27:37 crc kubenswrapper[4736]: I0316 17:27:37.979621 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:27:37 crc kubenswrapper[4736]: E0316 17:27:37.980462 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.048177 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:27:48 crc kubenswrapper[4736]: E0316 17:27:48.049208 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="registry-server" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049224 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="registry-server" Mar 16 17:27:48 crc kubenswrapper[4736]: E0316 17:27:48.049243 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="extract-content" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049251 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="extract-content" Mar 16 17:27:48 crc kubenswrapper[4736]: E0316 17:27:48.049284 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6862a736-6544-4282-9a6c-bd5eb9c32c56" containerName="oc" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049292 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6862a736-6544-4282-9a6c-bd5eb9c32c56" containerName="oc" Mar 16 17:27:48 crc kubenswrapper[4736]: E0316 17:27:48.049311 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="extract-utilities" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049319 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="extract-utilities" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049580 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6862a736-6544-4282-9a6c-bd5eb9c32c56" containerName="oc" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.049599 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb949e96-3156-4068-b156-8b4f44796f7e" containerName="registry-server" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.051232 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.053579 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.053663 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442pm\" (UniqueName: \"kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.053762 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.072810 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.156769 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.157227 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.157415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-442pm\" (UniqueName: \"kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.158214 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.158528 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.185981 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-442pm\" (UniqueName: \"kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm\") pod \"redhat-marketplace-s8z2z\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.372855 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.884119 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:27:48 crc kubenswrapper[4736]: I0316 17:27:48.990068 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:27:48 crc kubenswrapper[4736]: E0316 17:27:48.990983 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:27:49 crc kubenswrapper[4736]: I0316 17:27:49.487186 4736 generic.go:334] "Generic (PLEG): container finished" podID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerID="a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37" exitCode=0 Mar 16 17:27:49 crc kubenswrapper[4736]: I0316 17:27:49.487249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerDied","Data":"a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37"} Mar 16 17:27:49 crc kubenswrapper[4736]: I0316 17:27:49.487299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerStarted","Data":"1ef7f1fc30cd86e4649e9806ba3c08e01829d946236396a7cfbe23ca57b0ebb3"} Mar 16 17:27:51 crc kubenswrapper[4736]: I0316 17:27:51.514614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerStarted","Data":"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d"} Mar 16 17:27:52 crc kubenswrapper[4736]: I0316 17:27:52.528414 4736 generic.go:334] "Generic (PLEG): container finished" podID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerID="da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d" exitCode=0 Mar 16 17:27:52 crc kubenswrapper[4736]: I0316 17:27:52.528552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerDied","Data":"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d"} Mar 16 17:27:53 crc kubenswrapper[4736]: I0316 17:27:53.540182 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerStarted","Data":"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f"} Mar 16 17:27:53 crc kubenswrapper[4736]: I0316 17:27:53.568932 4736 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8z2z" podStartSLOduration=2.10484753 podStartE2EDuration="5.568916732s" podCreationTimestamp="2026-03-16 17:27:48 +0000 UTC" firstStartedPulling="2026-03-16 17:27:49.489211854 +0000 UTC m=+8071.216602141" lastFinishedPulling="2026-03-16 17:27:52.953281046 +0000 UTC m=+8074.680671343" observedRunningTime="2026-03-16 17:27:53.561641743 +0000 UTC m=+8075.289032030" watchObservedRunningTime="2026-03-16 17:27:53.568916732 +0000 UTC m=+8075.296307019" Mar 16 17:27:58 crc kubenswrapper[4736]: I0316 17:27:58.373766 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:58 crc kubenswrapper[4736]: I0316 17:27:58.375397 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:27:59 crc kubenswrapper[4736]: I0316 17:27:59.415847 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-s8z2z" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="registry-server" probeResult="failure" output=< Mar 16 17:27:59 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:27:59 crc kubenswrapper[4736]: > Mar 16 17:27:59 crc kubenswrapper[4736]: I0316 17:27:59.977890 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:27:59 crc kubenswrapper[4736]: E0316 17:27:59.978234 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.160772 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561368-tdh9w"] Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.162187 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.163926 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.167824 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.172200 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.180281 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561368-tdh9w"] Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.301515 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgcz\" (UniqueName: \"kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz\") pod \"auto-csr-approver-29561368-tdh9w\" (UID: \"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb\") " pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.403144 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgcz\" (UniqueName: \"kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz\") pod \"auto-csr-approver-29561368-tdh9w\" (UID: \"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb\") " pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.427789 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgcz\" (UniqueName: \"kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz\") pod \"auto-csr-approver-29561368-tdh9w\" (UID: \"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb\") " pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.478500 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:00 crc kubenswrapper[4736]: W0316 17:28:00.970504 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07bafaa_d40b_4dc1_bbbc_25fef83eb4eb.slice/crio-c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928 WatchSource:0}: Error finding container c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928: Status 404 returned error can't find the container with id c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928 Mar 16 17:28:00 crc kubenswrapper[4736]: I0316 17:28:00.977460 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561368-tdh9w"] Mar 16 17:28:01 crc kubenswrapper[4736]: I0316 17:28:01.630214 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" event={"ID":"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb","Type":"ContainerStarted","Data":"c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928"} Mar 16 17:28:02 crc kubenswrapper[4736]: I0316 17:28:02.640997 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" event={"ID":"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb","Type":"ContainerStarted","Data":"2c055d3f3ca1cb18b385b073d66f007d7cec801b20e7f3a9d48b98dea1c6f0fd"} Mar 16 17:28:02 crc kubenswrapper[4736]: I0316 17:28:02.657241 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" podStartSLOduration=1.627837667 podStartE2EDuration="2.657218737s" podCreationTimestamp="2026-03-16 17:28:00 +0000 UTC" firstStartedPulling="2026-03-16 17:28:00.972287662 +0000 UTC m=+8082.699677949" lastFinishedPulling="2026-03-16 17:28:02.001668732 +0000 UTC m=+8083.729059019" observedRunningTime="2026-03-16 17:28:02.655581843 +0000 UTC m=+8084.382972130" watchObservedRunningTime="2026-03-16 17:28:02.657218737 +0000 UTC m=+8084.384609024" Mar 16 17:28:03 crc kubenswrapper[4736]: I0316 17:28:03.650961 4736 generic.go:334] "Generic (PLEG): container finished" podID="b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" containerID="2c055d3f3ca1cb18b385b073d66f007d7cec801b20e7f3a9d48b98dea1c6f0fd" exitCode=0 Mar 16 17:28:03 crc kubenswrapper[4736]: I0316 17:28:03.651004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" event={"ID":"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb","Type":"ContainerDied","Data":"2c055d3f3ca1cb18b385b073d66f007d7cec801b20e7f3a9d48b98dea1c6f0fd"} Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.066568 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.216187 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzgcz\" (UniqueName: \"kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz\") pod \"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb\" (UID: \"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb\") " Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.222335 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz" (OuterVolumeSpecName: "kube-api-access-xzgcz") pod "b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" (UID: "b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb"). InnerVolumeSpecName "kube-api-access-xzgcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.319036 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzgcz\" (UniqueName: \"kubernetes.io/projected/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb-kube-api-access-xzgcz\") on node \"crc\" DevicePath \"\"" Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.674196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" event={"ID":"b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb","Type":"ContainerDied","Data":"c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928"} Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.674243 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7ab136e0babafccfb9352ca50d405901b5e56725ae451ec7d8cdd5d67c0e928" Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.674314 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561368-tdh9w" Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.735794 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561362-7m2vr"] Mar 16 17:28:05 crc kubenswrapper[4736]: I0316 17:28:05.751814 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561362-7m2vr"] Mar 16 17:28:06 crc kubenswrapper[4736]: I0316 17:28:06.989372 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f3d7ad-c663-476e-ac04-283e3be440bd" path="/var/lib/kubelet/pods/d9f3d7ad-c663-476e-ac04-283e3be440bd/volumes" Mar 16 17:28:08 crc kubenswrapper[4736]: I0316 17:28:08.421554 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:28:08 crc kubenswrapper[4736]: I0316 17:28:08.476200 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:28:08 crc kubenswrapper[4736]: I0316 17:28:08.667582 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:28:09 crc kubenswrapper[4736]: I0316 17:28:09.727048 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8z2z" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="registry-server" containerID="cri-o://33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f" gracePeriod=2 Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.255504 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.419163 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities\") pod \"49f04d5b-17ff-437f-b0f4-9e0009b52491\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.419229 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content\") pod \"49f04d5b-17ff-437f-b0f4-9e0009b52491\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.419431 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-442pm\" (UniqueName: \"kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm\") pod \"49f04d5b-17ff-437f-b0f4-9e0009b52491\" (UID: \"49f04d5b-17ff-437f-b0f4-9e0009b52491\") " Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.419962 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities" (OuterVolumeSpecName: "utilities") pod "49f04d5b-17ff-437f-b0f4-9e0009b52491" (UID: "49f04d5b-17ff-437f-b0f4-9e0009b52491"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.429408 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm" (OuterVolumeSpecName: "kube-api-access-442pm") pod "49f04d5b-17ff-437f-b0f4-9e0009b52491" (UID: "49f04d5b-17ff-437f-b0f4-9e0009b52491"). InnerVolumeSpecName "kube-api-access-442pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.453406 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f04d5b-17ff-437f-b0f4-9e0009b52491" (UID: "49f04d5b-17ff-437f-b0f4-9e0009b52491"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.521875 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.521908 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f04d5b-17ff-437f-b0f4-9e0009b52491-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.521922 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-442pm\" (UniqueName: \"kubernetes.io/projected/49f04d5b-17ff-437f-b0f4-9e0009b52491-kube-api-access-442pm\") on node \"crc\" DevicePath \"\"" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.739287 4736 generic.go:334] "Generic (PLEG): container finished" podID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerID="33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f" exitCode=0 Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.739351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerDied","Data":"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f"} Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.739407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8z2z" event={"ID":"49f04d5b-17ff-437f-b0f4-9e0009b52491","Type":"ContainerDied","Data":"1ef7f1fc30cd86e4649e9806ba3c08e01829d946236396a7cfbe23ca57b0ebb3"} Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.739439 4736 scope.go:117] "RemoveContainer" containerID="33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.739453 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8z2z" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.764466 4736 scope.go:117] "RemoveContainer" containerID="da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.789863 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.813341 4736 scope.go:117] "RemoveContainer" containerID="a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.815443 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8z2z"] Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.839969 4736 scope.go:117] "RemoveContainer" containerID="33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f" Mar 16 17:28:10 crc kubenswrapper[4736]: E0316 17:28:10.840963 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f\": container with ID starting with 33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f not found: ID does not exist" containerID="33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.841010 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f"} err="failed to get container status \"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f\": rpc error: code = NotFound desc = could not find container \"33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f\": container with ID starting with 33da79e087be1145931676dfe293762cb443c6438fea44bad679839ab810ed9f not found: ID does not exist" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.841045 4736 scope.go:117] "RemoveContainer" containerID="da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d" Mar 16 17:28:10 crc kubenswrapper[4736]: E0316 17:28:10.841704 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d\": container with ID starting with da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d not found: ID does not exist" containerID="da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.841734 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d"} err="failed to get container status \"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d\": rpc error: code = NotFound desc = could not find container \"da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d\": container with ID starting with da02033b2bd7b2fd5898831ff24524cf4b1f3f56cc13f95611c7bbd210b93e5d not found: ID does not exist" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.841759 4736 scope.go:117] "RemoveContainer" containerID="a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37" Mar 16 17:28:10 crc kubenswrapper[4736]: E0316 17:28:10.842628 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37\": container with ID starting with a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37 not found: ID does not exist" containerID="a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.842659 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37"} err="failed to get container status \"a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37\": rpc error: code = NotFound desc = could not find container \"a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37\": container with ID starting with a0943379a50243f7a867ec903281d87197032472a7532a1ffc825dfbf98b5a37 not found: ID does not exist" Mar 16 17:28:10 crc kubenswrapper[4736]: I0316 17:28:10.988672 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" path="/var/lib/kubelet/pods/49f04d5b-17ff-437f-b0f4-9e0009b52491/volumes" Mar 16 17:28:12 crc kubenswrapper[4736]: I0316 17:28:12.979282 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:28:12 crc kubenswrapper[4736]: E0316 17:28:12.979830 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:28:25 crc kubenswrapper[4736]: I0316 17:28:25.978507 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:28:25 crc kubenswrapper[4736]: E0316 17:28:25.979217 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:28:38 crc kubenswrapper[4736]: I0316 17:28:38.990257 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:28:40 crc kubenswrapper[4736]: I0316 17:28:40.013559 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183"} Mar 16 17:29:02 crc kubenswrapper[4736]: I0316 17:29:02.263309 4736 scope.go:117] "RemoveContainer" containerID="afc34f8d22f26ab06d34ea72ff0b5b04c10ae01ed917b740ed65bf1d9fc9d8bc" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.168222 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq"] Mar 16 17:30:00 crc kubenswrapper[4736]: E0316 17:30:00.172705 4736 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="extract-content" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.173145 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="extract-content" Mar 16 17:30:00 crc kubenswrapper[4736]: E0316 17:30:00.173358 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="extract-utilities" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.173501 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="extract-utilities" Mar 16 17:30:00 crc kubenswrapper[4736]: E0316 17:30:00.173628 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" containerName="oc" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.173783 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" containerName="oc" Mar 16 17:30:00 crc kubenswrapper[4736]: E0316 17:30:00.173917 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="registry-server" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.174025 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="registry-server" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.174528 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" containerName="oc" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.174707 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f04d5b-17ff-437f-b0f4-9e0009b52491" containerName="registry-server" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.176069 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.182683 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561370-mxptm"] Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.184637 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.185784 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.187147 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.187442 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.187807 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.188160 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.196243 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq"] Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.210249 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561370-mxptm"] Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.309681 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8pgx\" (UniqueName: \"kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx\") pod \"auto-csr-approver-29561370-mxptm\" (UID: \"2183f226-fb1b-4a51-a6af-87cb20b6897c\") " pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.309813 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.309843 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv9w4\" (UniqueName: \"kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.309864 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.411480 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.411541 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv9w4\" (UniqueName: \"kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.411593 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.411738 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8pgx\" (UniqueName: \"kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx\") pod \"auto-csr-approver-29561370-mxptm\" (UID: \"2183f226-fb1b-4a51-a6af-87cb20b6897c\") " pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.413014 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.421944 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.433525 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv9w4\" (UniqueName: \"kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4\") pod \"collect-profiles-29561370-xrjlq\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.442184 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8pgx\" (UniqueName: \"kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx\") pod \"auto-csr-approver-29561370-mxptm\" (UID: \"2183f226-fb1b-4a51-a6af-87cb20b6897c\") " pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.508854 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:00 crc kubenswrapper[4736]: I0316 17:30:00.533576 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.056526 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561370-mxptm"] Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.076043 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:30:01 crc kubenswrapper[4736]: W0316 17:30:01.154763 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc919f4_69b8_4475_98f2_413962ddefb4.slice/crio-2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5 WatchSource:0}: Error finding container 2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5: Status 404 returned error can't find the container with id 2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5 Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.157853 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq"] Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.798723 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" event={"ID":"3dc919f4-69b8-4475-98f2-413962ddefb4","Type":"ContainerStarted","Data":"d1d983ca7ac478df850289c46acdef42517546cbf87d2dbcb332cdde75583835"} Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.799027 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" event={"ID":"3dc919f4-69b8-4475-98f2-413962ddefb4","Type":"ContainerStarted","Data":"2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5"} Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.800190 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561370-mxptm" event={"ID":"2183f226-fb1b-4a51-a6af-87cb20b6897c","Type":"ContainerStarted","Data":"9b1727d2e2ecdb57ad65dee200ad9233bbcce8820b38193f36d67959b723c389"} Mar 16 17:30:01 crc kubenswrapper[4736]: I0316 17:30:01.841952 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" podStartSLOduration=1.841931054 podStartE2EDuration="1.841931054s" podCreationTimestamp="2026-03-16 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:30:01.834362828 +0000 UTC m=+8203.561753115" watchObservedRunningTime="2026-03-16 17:30:01.841931054 +0000 UTC m=+8203.569321351" Mar 16 17:30:02 crc kubenswrapper[4736]: I0316 17:30:02.809678 4736 generic.go:334] "Generic (PLEG): container finished" podID="3dc919f4-69b8-4475-98f2-413962ddefb4" containerID="d1d983ca7ac478df850289c46acdef42517546cbf87d2dbcb332cdde75583835" exitCode=0 Mar 16 17:30:02 crc kubenswrapper[4736]: I0316 17:30:02.809753 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" event={"ID":"3dc919f4-69b8-4475-98f2-413962ddefb4","Type":"ContainerDied","Data":"d1d983ca7ac478df850289c46acdef42517546cbf87d2dbcb332cdde75583835"} Mar 16 17:30:03 crc kubenswrapper[4736]: I0316 17:30:03.822578 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561370-mxptm" 
event={"ID":"2183f226-fb1b-4a51-a6af-87cb20b6897c","Type":"ContainerStarted","Data":"353f3c4fdd2a1a2b5be3f845b20ab52e8c8d4811326048909fa4e4eecc47dce7"} Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.290787 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.312527 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561370-mxptm" podStartSLOduration=2.099267019 podStartE2EDuration="4.312502416s" podCreationTimestamp="2026-03-16 17:30:00 +0000 UTC" firstStartedPulling="2026-03-16 17:30:01.075814321 +0000 UTC m=+8202.803204608" lastFinishedPulling="2026-03-16 17:30:03.289049718 +0000 UTC m=+8205.016440005" observedRunningTime="2026-03-16 17:30:03.848654805 +0000 UTC m=+8205.576045092" watchObservedRunningTime="2026-03-16 17:30:04.312502416 +0000 UTC m=+8206.039892713" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.396434 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume\") pod \"3dc919f4-69b8-4475-98f2-413962ddefb4\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.396526 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume\") pod \"3dc919f4-69b8-4475-98f2-413962ddefb4\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.396761 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv9w4\" (UniqueName: \"kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4\") pod \"3dc919f4-69b8-4475-98f2-413962ddefb4\" (UID: \"3dc919f4-69b8-4475-98f2-413962ddefb4\") " Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.397818 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume" (OuterVolumeSpecName: "config-volume") pod "3dc919f4-69b8-4475-98f2-413962ddefb4" (UID: "3dc919f4-69b8-4475-98f2-413962ddefb4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.404142 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3dc919f4-69b8-4475-98f2-413962ddefb4" (UID: "3dc919f4-69b8-4475-98f2-413962ddefb4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.404497 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4" (OuterVolumeSpecName: "kube-api-access-vv9w4") pod "3dc919f4-69b8-4475-98f2-413962ddefb4" (UID: "3dc919f4-69b8-4475-98f2-413962ddefb4"). InnerVolumeSpecName "kube-api-access-vv9w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.499010 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv9w4\" (UniqueName: \"kubernetes.io/projected/3dc919f4-69b8-4475-98f2-413962ddefb4-kube-api-access-vv9w4\") on node \"crc\" DevicePath \"\"" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.499042 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dc919f4-69b8-4475-98f2-413962ddefb4-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.499053 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3dc919f4-69b8-4475-98f2-413962ddefb4-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.832196 4736 generic.go:334] "Generic (PLEG): container finished" podID="2183f226-fb1b-4a51-a6af-87cb20b6897c" containerID="353f3c4fdd2a1a2b5be3f845b20ab52e8c8d4811326048909fa4e4eecc47dce7" exitCode=0 Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.832262 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561370-mxptm" event={"ID":"2183f226-fb1b-4a51-a6af-87cb20b6897c","Type":"ContainerDied","Data":"353f3c4fdd2a1a2b5be3f845b20ab52e8c8d4811326048909fa4e4eecc47dce7"} Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.835768 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" event={"ID":"3dc919f4-69b8-4475-98f2-413962ddefb4","Type":"ContainerDied","Data":"2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5"} Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.835805 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7c7c5fbfab0f78413b8c13334b253c9339b00bbb56fb6352f88ee9cd6ac9f5" Mar 16 17:30:04 crc kubenswrapper[4736]: I0316 17:30:04.835849 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq" Mar 16 17:30:05 crc kubenswrapper[4736]: I0316 17:30:05.374854 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl"] Mar 16 17:30:05 crc kubenswrapper[4736]: I0316 17:30:05.386855 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561325-kf2fl"] Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.200422 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.343362 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8pgx\" (UniqueName: \"kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx\") pod \"2183f226-fb1b-4a51-a6af-87cb20b6897c\" (UID: \"2183f226-fb1b-4a51-a6af-87cb20b6897c\") " Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.352075 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx" (OuterVolumeSpecName: "kube-api-access-m8pgx") pod "2183f226-fb1b-4a51-a6af-87cb20b6897c" (UID: "2183f226-fb1b-4a51-a6af-87cb20b6897c"). 
InnerVolumeSpecName "kube-api-access-m8pgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.446218 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8pgx\" (UniqueName: \"kubernetes.io/projected/2183f226-fb1b-4a51-a6af-87cb20b6897c-kube-api-access-m8pgx\") on node \"crc\" DevicePath \"\"" Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.884375 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561370-mxptm" event={"ID":"2183f226-fb1b-4a51-a6af-87cb20b6897c","Type":"ContainerDied","Data":"9b1727d2e2ecdb57ad65dee200ad9233bbcce8820b38193f36d67959b723c389"} Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.884415 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1727d2e2ecdb57ad65dee200ad9233bbcce8820b38193f36d67959b723c389" Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.884442 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561370-mxptm" Mar 16 17:30:06 crc kubenswrapper[4736]: I0316 17:30:06.989672 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0731b440-a0eb-4665-81ad-6c49663b31ce" path="/var/lib/kubelet/pods/0731b440-a0eb-4665-81ad-6c49663b31ce/volumes" Mar 16 17:30:07 crc kubenswrapper[4736]: I0316 17:30:07.264090 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561364-jznzd"] Mar 16 17:30:07 crc kubenswrapper[4736]: I0316 17:30:07.276604 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561364-jznzd"] Mar 16 17:30:08 crc kubenswrapper[4736]: I0316 17:30:08.990421 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b4ae30-6721-43aa-8e4b-8f2e1321dde6" path="/var/lib/kubelet/pods/10b4ae30-6721-43aa-8e4b-8f2e1321dde6/volumes" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.500067 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kffc4"] Mar 16 17:30:37 crc kubenswrapper[4736]: E0316 17:30:37.501087 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2183f226-fb1b-4a51-a6af-87cb20b6897c" containerName="oc" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.501129 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2183f226-fb1b-4a51-a6af-87cb20b6897c" containerName="oc" Mar 16 17:30:37 crc kubenswrapper[4736]: E0316 17:30:37.501169 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc919f4-69b8-4475-98f2-413962ddefb4" containerName="collect-profiles" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.501178 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc919f4-69b8-4475-98f2-413962ddefb4" containerName="collect-profiles" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.501436 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2183f226-fb1b-4a51-a6af-87cb20b6897c" containerName="oc" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.501462 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc919f4-69b8-4475-98f2-413962ddefb4" containerName="collect-profiles" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.503251 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.527541 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kffc4"] Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.616467 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-utilities\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.616572 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-catalog-content\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.616629 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krsf\" (UniqueName: \"kubernetes.io/projected/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-kube-api-access-4krsf\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.718255 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-utilities\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.718327 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-catalog-content\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.718376 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krsf\" (UniqueName: \"kubernetes.io/projected/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-kube-api-access-4krsf\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.718735 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-utilities\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.719216 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-catalog-content\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.745694 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4krsf\" (UniqueName: \"kubernetes.io/projected/8fdffd9a-9fd4-4ec8-a660-cbd4c759b375-kube-api-access-4krsf\") pod \"redhat-operators-kffc4\" (UID: \"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375\") " pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:37 crc kubenswrapper[4736]: I0316 17:30:37.823979 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:38 crc kubenswrapper[4736]: I0316 17:30:38.397738 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kffc4"] Mar 16 17:30:39 crc kubenswrapper[4736]: I0316 17:30:39.194438 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerID="6bd5dcff95064ed227af6dbe23288b3a522b08fa07f8b97fcd08cf9498be1978" exitCode=0 Mar 16 17:30:39 crc kubenswrapper[4736]: I0316 17:30:39.194537 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kffc4" event={"ID":"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375","Type":"ContainerDied","Data":"6bd5dcff95064ed227af6dbe23288b3a522b08fa07f8b97fcd08cf9498be1978"} Mar 16 17:30:39 crc kubenswrapper[4736]: I0316 17:30:39.194838 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kffc4" event={"ID":"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375","Type":"ContainerStarted","Data":"f0f9bd2a06570c081a163ad00d55faa16678d465dba5a02d8da380162edede4d"} Mar 16 17:30:51 crc kubenswrapper[4736]: I0316 17:30:51.380787 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kffc4" event={"ID":"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375","Type":"ContainerStarted","Data":"c79a1db49d66cc2a32888e64c457ce8e29c4b357d78b67dbc8b15bcf22113fc1"} Mar 16 17:30:56 crc kubenswrapper[4736]: I0316 17:30:56.429646 4736 generic.go:334] "Generic (PLEG): container finished" podID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerID="c79a1db49d66cc2a32888e64c457ce8e29c4b357d78b67dbc8b15bcf22113fc1" exitCode=0 Mar 16 17:30:56 crc kubenswrapper[4736]: I0316 17:30:56.429717 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kffc4" event={"ID":"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375","Type":"ContainerDied","Data":"c79a1db49d66cc2a32888e64c457ce8e29c4b357d78b67dbc8b15bcf22113fc1"} Mar 16 17:30:57 crc kubenswrapper[4736]: I0316 17:30:57.442579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kffc4" event={"ID":"8fdffd9a-9fd4-4ec8-a660-cbd4c759b375","Type":"ContainerStarted","Data":"7d1f26c7085b98a1511960e7a15c0b3cf6b96d6bdfbd58ca0a7696b9b65a87aa"} Mar 16 17:30:57 crc kubenswrapper[4736]: I0316 17:30:57.469894 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kffc4" podStartSLOduration=2.830876247 podStartE2EDuration="20.469871541s" podCreationTimestamp="2026-03-16 17:30:37 +0000 UTC" firstStartedPulling="2026-03-16 17:30:39.196584502 +0000 UTC m=+8240.923974809" lastFinishedPulling="2026-03-16 17:30:56.835579816 +0000 UTC m=+8258.562970103" observedRunningTime="2026-03-16 17:30:57.463454765 +0000 UTC m=+8259.190845072" watchObservedRunningTime="2026-03-16 17:30:57.469871541 +0000 UTC m=+8259.197261838" Mar 16 17:30:57 crc kubenswrapper[4736]: I0316 17:30:57.824597 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kffc4" 
Mar 16 17:30:57 crc kubenswrapper[4736]: I0316 17:30:57.826646 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:30:58 crc kubenswrapper[4736]: I0316 17:30:58.911217 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kffc4" podUID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerName="registry-server" probeResult="failure" output=< Mar 16 17:30:58 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:30:58 crc kubenswrapper[4736]: > Mar 16 17:31:02 crc kubenswrapper[4736]: I0316 17:31:02.376392 4736 scope.go:117] "RemoveContainer" containerID="e8dcba289a4e4f1297b330074ac7329a40782364d222e8ca495aea384f669573" Mar 16 17:31:02 crc kubenswrapper[4736]: I0316 17:31:02.436598 4736 scope.go:117] "RemoveContainer" containerID="33095cd6dcf13d036c16292dc5a46d514369c90ce01a49100855b5853b27e2de" Mar 16 17:31:08 crc kubenswrapper[4736]: I0316 17:31:08.508233 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:31:08 crc kubenswrapper[4736]: I0316 17:31:08.509479 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:31:08 crc kubenswrapper[4736]: I0316 17:31:08.886311 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kffc4" podUID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerName="registry-server" probeResult="failure" output=< Mar 16 17:31:08 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:31:08 crc kubenswrapper[4736]: > Mar 16 17:31:18 crc kubenswrapper[4736]: I0316 17:31:18.881043 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kffc4" podUID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerName="registry-server" probeResult="failure" output=< Mar 16 17:31:18 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:31:18 crc kubenswrapper[4736]: > Mar 16 17:31:28 crc kubenswrapper[4736]: I0316 17:31:28.884036 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kffc4" podUID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerName="registry-server" probeResult="failure" output=< Mar 16 17:31:28 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:31:28 crc kubenswrapper[4736]: > Mar 16 17:31:37 crc kubenswrapper[4736]: I0316 17:31:37.889511 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:31:37 crc kubenswrapper[4736]: I0316 17:31:37.945058 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kffc4" Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.508365 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.508823 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.565639 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kffc4"] Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.722741 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.725195 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48gwj" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" containerID="cri-o://4829d889cfbe624e06456c45e8fbb5283c57d8a49ecd6ffc4da94ce06b53aeae" gracePeriod=2 Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.883096 4736 generic.go:334] "Generic (PLEG): container finished" podID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerID="4829d889cfbe624e06456c45e8fbb5283c57d8a49ecd6ffc4da94ce06b53aeae" exitCode=0 Mar 16 17:31:38 crc kubenswrapper[4736]: I0316 17:31:38.883425 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerDied","Data":"4829d889cfbe624e06456c45e8fbb5283c57d8a49ecd6ffc4da94ce06b53aeae"} Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.772897 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.864299 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content\") pod \"785f2d20-3733-4b65-827e-45a047ecc4c6\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.864493 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities\") pod \"785f2d20-3733-4b65-827e-45a047ecc4c6\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.864530 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vsmw\" (UniqueName: \"kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw\") pod \"785f2d20-3733-4b65-827e-45a047ecc4c6\" (UID: \"785f2d20-3733-4b65-827e-45a047ecc4c6\") " Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.866741 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities" (OuterVolumeSpecName: "utilities") pod "785f2d20-3733-4b65-827e-45a047ecc4c6" (UID: "785f2d20-3733-4b65-827e-45a047ecc4c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.902360 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw" (OuterVolumeSpecName: "kube-api-access-8vsmw") pod "785f2d20-3733-4b65-827e-45a047ecc4c6" (UID: "785f2d20-3733-4b65-827e-45a047ecc4c6"). InnerVolumeSpecName "kube-api-access-8vsmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.902471 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48gwj" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.903008 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48gwj" event={"ID":"785f2d20-3733-4b65-827e-45a047ecc4c6","Type":"ContainerDied","Data":"2faa7c949699922ca0990abae30a5eea5733b2012648cc84a27a055cda75c596"} Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.903060 4736 scope.go:117] "RemoveContainer" containerID="4829d889cfbe624e06456c45e8fbb5283c57d8a49ecd6ffc4da94ce06b53aeae" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.972281 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.972519 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vsmw\" (UniqueName: \"kubernetes.io/projected/785f2d20-3733-4b65-827e-45a047ecc4c6-kube-api-access-8vsmw\") on node \"crc\" DevicePath \"\"" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.979020 4736 scope.go:117] "RemoveContainer" containerID="301cfac7673cc6d2892771cbaf4fc28144e592f25278a970a3e8a7dce577b3d9" Mar 16 17:31:39 crc kubenswrapper[4736]: I0316 17:31:39.985151 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "785f2d20-3733-4b65-827e-45a047ecc4c6" (UID: "785f2d20-3733-4b65-827e-45a047ecc4c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:31:40 crc kubenswrapper[4736]: I0316 17:31:40.022679 4736 scope.go:117] "RemoveContainer" containerID="0ee69cb373612e21e2294d9b24629cbdd631c6bb9b84ca384c3ddde50bab6932" Mar 16 17:31:40 crc kubenswrapper[4736]: I0316 17:31:40.074451 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/785f2d20-3733-4b65-827e-45a047ecc4c6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:31:40 crc kubenswrapper[4736]: I0316 17:31:40.243493 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 17:31:40 crc kubenswrapper[4736]: I0316 17:31:40.255259 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48gwj"] Mar 16 17:31:40 crc kubenswrapper[4736]: I0316 17:31:40.994245 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" path="/var/lib/kubelet/pods/785f2d20-3733-4b65-827e-45a047ecc4c6/volumes" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.251913 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561372-4jt7c"] Mar 16 17:32:00 crc kubenswrapper[4736]: E0316 17:32:00.254421 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="extract-content" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.254445 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="extract-content" Mar 16 17:32:00 crc kubenswrapper[4736]: E0316 17:32:00.254459 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.254465 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" Mar 16 17:32:00 crc kubenswrapper[4736]: E0316 17:32:00.254490 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="extract-utilities" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.254496 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="extract-utilities" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.254675 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="785f2d20-3733-4b65-827e-45a047ecc4c6" containerName="registry-server" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.256281 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.295873 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.295888 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.296166 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.340577 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561372-4jt7c"] Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.381349 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbmf\" (UniqueName: \"kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf\") pod \"auto-csr-approver-29561372-4jt7c\" (UID: \"f429c505-8dd1-49de-b210-c2a2974d47df\") " pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.483348 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbmf\" (UniqueName: \"kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf\") pod \"auto-csr-approver-29561372-4jt7c\" (UID: \"f429c505-8dd1-49de-b210-c2a2974d47df\") " pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.508186 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbmf\" (UniqueName: \"kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf\") pod \"auto-csr-approver-29561372-4jt7c\" (UID: \"f429c505-8dd1-49de-b210-c2a2974d47df\") " pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:00 crc kubenswrapper[4736]: I0316 17:32:00.598707 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:01 crc kubenswrapper[4736]: I0316 17:32:01.133515 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561372-4jt7c"] Mar 16 17:32:01 crc kubenswrapper[4736]: W0316 17:32:01.143653 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf429c505_8dd1_49de_b210_c2a2974d47df.slice/crio-f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a WatchSource:0}: Error finding container f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a: Status 404 returned error can't find the container with id f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a Mar 16 17:32:02 crc kubenswrapper[4736]: I0316 17:32:02.120616 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" event={"ID":"f429c505-8dd1-49de-b210-c2a2974d47df","Type":"ContainerStarted","Data":"f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a"} Mar 16 17:32:03 crc kubenswrapper[4736]: I0316 17:32:03.131589 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" event={"ID":"f429c505-8dd1-49de-b210-c2a2974d47df","Type":"ContainerStarted","Data":"9a06f7dccbb02fbe81e87f5c95605eae25c2ac590e4ec0bcb44c04bafef10f7d"} Mar 16 17:32:04 crc kubenswrapper[4736]: I0316 17:32:04.141081 4736 generic.go:334] "Generic (PLEG): container finished" podID="f429c505-8dd1-49de-b210-c2a2974d47df" containerID="9a06f7dccbb02fbe81e87f5c95605eae25c2ac590e4ec0bcb44c04bafef10f7d" exitCode=0 Mar 16 17:32:04 crc kubenswrapper[4736]: I0316 17:32:04.141717 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" event={"ID":"f429c505-8dd1-49de-b210-c2a2974d47df","Type":"ContainerDied","Data":"9a06f7dccbb02fbe81e87f5c95605eae25c2ac590e4ec0bcb44c04bafef10f7d"} Mar 16 17:32:05 crc kubenswrapper[4736]: I0316 17:32:05.539624 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:05 crc kubenswrapper[4736]: I0316 17:32:05.579001 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbmf\" (UniqueName: \"kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf\") pod \"f429c505-8dd1-49de-b210-c2a2974d47df\" (UID: \"f429c505-8dd1-49de-b210-c2a2974d47df\") " Mar 16 17:32:05 crc kubenswrapper[4736]: I0316 17:32:05.586820 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf" (OuterVolumeSpecName: "kube-api-access-9wbmf") pod "f429c505-8dd1-49de-b210-c2a2974d47df" (UID: "f429c505-8dd1-49de-b210-c2a2974d47df"). InnerVolumeSpecName "kube-api-access-9wbmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:32:05 crc kubenswrapper[4736]: I0316 17:32:05.681324 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wbmf\" (UniqueName: \"kubernetes.io/projected/f429c505-8dd1-49de-b210-c2a2974d47df-kube-api-access-9wbmf\") on node \"crc\" DevicePath \"\"" Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.158224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" event={"ID":"f429c505-8dd1-49de-b210-c2a2974d47df","Type":"ContainerDied","Data":"f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a"} Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.158284 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561372-4jt7c" Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.158722 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a0a643b6bf394ab83e25198862d2b938db27bde4948bc8e01e51141814b90a" Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.226199 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561366-b6wzq"] Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.234702 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561366-b6wzq"] Mar 16 17:32:06 crc kubenswrapper[4736]: I0316 17:32:06.989464 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6862a736-6544-4282-9a6c-bd5eb9c32c56" path="/var/lib/kubelet/pods/6862a736-6544-4282-9a6c-bd5eb9c32c56/volumes" Mar 16 17:32:08 crc kubenswrapper[4736]: I0316 17:32:08.508457 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:32:08 crc kubenswrapper[4736]: I0316 17:32:08.509216 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:32:08 crc kubenswrapper[4736]: I0316 17:32:08.509337 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:32:08 crc kubenswrapper[4736]: I0316 17:32:08.511811 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:32:08 crc kubenswrapper[4736]: I0316 17:32:08.512008 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183" gracePeriod=600 Mar 16 17:32:09 crc kubenswrapper[4736]: I0316 17:32:09.192552 4736 generic.go:334] "Generic 
(PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183" exitCode=0 Mar 16 17:32:09 crc kubenswrapper[4736]: I0316 17:32:09.192618 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183"} Mar 16 17:32:09 crc kubenswrapper[4736]: I0316 17:32:09.193094 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a"} Mar 16 17:32:09 crc kubenswrapper[4736]: I0316 17:32:09.193167 4736 scope.go:117] "RemoveContainer" containerID="c2d047e7494cecb2e6bc87f1284b601935dc5426175cf5250a168bd8bb24a72c" Mar 16 17:33:02 crc kubenswrapper[4736]: I0316 17:33:02.678655 4736 scope.go:117] "RemoveContainer" containerID="287d4902c1c6af074fa01eac7c1c91330e77d7245d7d7a7b757a36474f83c7e1" Mar 16 17:33:26 crc kubenswrapper[4736]: E0316 17:33:26.581731 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:42338->38.102.83.30:38289: write tcp 38.102.83.30:42338->38.102.83.30:38289: write: broken pipe Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.151229 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561374-mfsrr"] Mar 16 17:34:00 crc kubenswrapper[4736]: E0316 17:34:00.152217 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f429c505-8dd1-49de-b210-c2a2974d47df" containerName="oc" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.152232 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f429c505-8dd1-49de-b210-c2a2974d47df" containerName="oc" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.152478 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f429c505-8dd1-49de-b210-c2a2974d47df" containerName="oc" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.155495 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.159394 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.159817 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.160193 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.185776 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561374-mfsrr"] Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.239235 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlz4j\" (UniqueName: \"kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j\") pod \"auto-csr-approver-29561374-mfsrr\" (UID: \"6acd0f5b-8463-4acc-9dac-f5129e0dcea8\") " pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.341317 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlz4j\" (UniqueName: \"kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j\") pod \"auto-csr-approver-29561374-mfsrr\" (UID: \"6acd0f5b-8463-4acc-9dac-f5129e0dcea8\") " pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.366992 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlz4j\" (UniqueName: \"kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j\") pod \"auto-csr-approver-29561374-mfsrr\" (UID: \"6acd0f5b-8463-4acc-9dac-f5129e0dcea8\") " pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:00 crc kubenswrapper[4736]: I0316 17:34:00.482519 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:01 crc kubenswrapper[4736]: I0316 17:34:01.116784 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561374-mfsrr"] Mar 16 17:34:01 crc kubenswrapper[4736]: I0316 17:34:01.534282 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" event={"ID":"6acd0f5b-8463-4acc-9dac-f5129e0dcea8","Type":"ContainerStarted","Data":"79169995904f4d9a30428fe4a4b6de391a7013883ef58682a2d32aad806191db"} Mar 16 17:34:03 crc kubenswrapper[4736]: I0316 17:34:03.553992 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" event={"ID":"6acd0f5b-8463-4acc-9dac-f5129e0dcea8","Type":"ContainerStarted","Data":"23a2537b15dc16f527427374d3a3d4d60b4881010bda9dd59c41dead69d8297f"} Mar 16 17:34:03 crc kubenswrapper[4736]: I0316 17:34:03.583919 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" podStartSLOduration=2.443651436 podStartE2EDuration="3.583328957s" podCreationTimestamp="2026-03-16 17:34:00 +0000 UTC" firstStartedPulling="2026-03-16 17:34:01.129486142 +0000 UTC m=+8442.856876429" lastFinishedPulling="2026-03-16 17:34:02.269163663 +0000 UTC m=+8443.996553950" observedRunningTime="2026-03-16 17:34:03.574128965 +0000 UTC m=+8445.301519272" watchObservedRunningTime="2026-03-16 17:34:03.583328957 +0000 UTC m=+8445.310719244" Mar 16 17:34:04 crc kubenswrapper[4736]: I0316 17:34:04.569274 4736 generic.go:334] "Generic (PLEG): container finished" podID="6acd0f5b-8463-4acc-9dac-f5129e0dcea8" containerID="23a2537b15dc16f527427374d3a3d4d60b4881010bda9dd59c41dead69d8297f" exitCode=0 Mar 16 17:34:04 crc kubenswrapper[4736]: I0316 17:34:04.569351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" event={"ID":"6acd0f5b-8463-4acc-9dac-f5129e0dcea8","Type":"ContainerDied","Data":"23a2537b15dc16f527427374d3a3d4d60b4881010bda9dd59c41dead69d8297f"} Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.100628 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.165386 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlz4j\" (UniqueName: \"kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j\") pod \"6acd0f5b-8463-4acc-9dac-f5129e0dcea8\" (UID: \"6acd0f5b-8463-4acc-9dac-f5129e0dcea8\") " Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.171110 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j" (OuterVolumeSpecName: "kube-api-access-xlz4j") pod "6acd0f5b-8463-4acc-9dac-f5129e0dcea8" (UID: "6acd0f5b-8463-4acc-9dac-f5129e0dcea8"). InnerVolumeSpecName "kube-api-access-xlz4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.268865 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlz4j\" (UniqueName: \"kubernetes.io/projected/6acd0f5b-8463-4acc-9dac-f5129e0dcea8-kube-api-access-xlz4j\") on node \"crc\" DevicePath \"\"" Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.593325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" event={"ID":"6acd0f5b-8463-4acc-9dac-f5129e0dcea8","Type":"ContainerDied","Data":"79169995904f4d9a30428fe4a4b6de391a7013883ef58682a2d32aad806191db"} Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.593657 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79169995904f4d9a30428fe4a4b6de391a7013883ef58682a2d32aad806191db" Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.593501 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561374-mfsrr" Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.658092 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561368-tdh9w"] Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.667243 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561368-tdh9w"] Mar 16 17:34:06 crc kubenswrapper[4736]: I0316 17:34:06.994958 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb" path="/var/lib/kubelet/pods/b07bafaa-d40b-4dc1-bbbc-25fef83eb4eb/volumes" Mar 16 17:34:08 crc kubenswrapper[4736]: I0316 17:34:08.508016 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:34:08 crc kubenswrapper[4736]: I0316 17:34:08.509096 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:34:38 crc kubenswrapper[4736]: E0316 17:34:38.452651 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:50270->38.102.83.30:38289: write tcp 38.102.83.30:50270->38.102.83.30:38289: write: broken pipe Mar 16 17:34:38 crc kubenswrapper[4736]: I0316 17:34:38.508399 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:34:38 crc kubenswrapper[4736]: I0316 17:34:38.508483 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:35:02 crc kubenswrapper[4736]: I0316 17:35:02.849146 4736 scope.go:117] "RemoveContainer" 
containerID="2c055d3f3ca1cb18b385b073d66f007d7cec801b20e7f3a9d48b98dea1c6f0fd" Mar 16 17:35:08 crc kubenswrapper[4736]: I0316 17:35:08.508217 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:35:08 crc kubenswrapper[4736]: I0316 17:35:08.508761 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:35:08 crc kubenswrapper[4736]: I0316 17:35:08.508802 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:35:08 crc kubenswrapper[4736]: I0316 17:35:08.509548 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:35:08 crc kubenswrapper[4736]: I0316 17:35:08.509593 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" gracePeriod=600 Mar 16 17:35:08 crc kubenswrapper[4736]: E0316 17:35:08.632717 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:35:09 crc kubenswrapper[4736]: I0316 17:35:09.181056 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" exitCode=0 Mar 16 17:35:09 crc kubenswrapper[4736]: I0316 17:35:09.181131 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a"} Mar 16 17:35:09 crc kubenswrapper[4736]: I0316 17:35:09.181208 4736 scope.go:117] "RemoveContainer" containerID="f43f7470d1fa67399a1d97da57dd04ecb7dddadaf5dac9fefb4d418a4925a183" Mar 16 17:35:09 crc kubenswrapper[4736]: I0316 17:35:09.181920 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:35:09 crc kubenswrapper[4736]: E0316 17:35:09.182253 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:35:21 crc kubenswrapper[4736]: I0316 17:35:21.978577 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:35:21 crc kubenswrapper[4736]: E0316 17:35:21.979436 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:35:35 crc kubenswrapper[4736]: I0316 17:35:35.978623 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:35:35 crc kubenswrapper[4736]: E0316 17:35:35.979688 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.509172 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:35:45 crc kubenswrapper[4736]: E0316 17:35:45.509782 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acd0f5b-8463-4acc-9dac-f5129e0dcea8" containerName="oc" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.509794 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acd0f5b-8463-4acc-9dac-f5129e0dcea8" containerName="oc" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.509974 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acd0f5b-8463-4acc-9dac-f5129e0dcea8" containerName="oc" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.513396 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.540256 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.655149 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.655560 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8jf\" (UniqueName: \"kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.655762 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.757151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.757258 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8jf\" (UniqueName: \"kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.757293 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.758427 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.758704 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.782652 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gz8jf\" (UniqueName: \"kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf\") pod \"certified-operators-wd8tw\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:45 crc kubenswrapper[4736]: I0316 17:35:45.842895 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:46 crc kubenswrapper[4736]: I0316 17:35:46.899682 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:35:47 crc kubenswrapper[4736]: I0316 17:35:47.594694 4736 generic.go:334] "Generic (PLEG): container finished" podID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerID="8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8" exitCode=0 Mar 16 17:35:47 crc kubenswrapper[4736]: I0316 17:35:47.594745 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerDied","Data":"8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8"} Mar 16 17:35:47 crc kubenswrapper[4736]: I0316 17:35:47.594776 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerStarted","Data":"270ea0c51511177f9d55e2e2fda92ea61ca25b92acc75bf9d5d17395c729e92d"} Mar 16 17:35:47 crc kubenswrapper[4736]: I0316 17:35:47.598459 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:35:48 crc kubenswrapper[4736]: I0316 17:35:48.993884 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:35:48 crc kubenswrapper[4736]: E0316 17:35:48.994745 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:35:49 crc kubenswrapper[4736]: I0316 17:35:49.617476 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerStarted","Data":"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5"} Mar 16 17:35:51 crc kubenswrapper[4736]: I0316 17:35:51.645865 4736 generic.go:334] "Generic (PLEG): container finished" podID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerID="119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5" exitCode=0 Mar 16 17:35:51 crc kubenswrapper[4736]: I0316 17:35:51.645977 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerDied","Data":"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5"} Mar 16 17:35:52 crc kubenswrapper[4736]: I0316 17:35:52.657895 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" 
event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerStarted","Data":"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca"} Mar 16 17:35:52 crc kubenswrapper[4736]: I0316 17:35:52.683232 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wd8tw" podStartSLOduration=3.175097167 podStartE2EDuration="7.683205693s" podCreationTimestamp="2026-03-16 17:35:45 +0000 UTC" firstStartedPulling="2026-03-16 17:35:47.596666188 +0000 UTC m=+8549.324056485" lastFinishedPulling="2026-03-16 17:35:52.104774724 +0000 UTC m=+8553.832165011" observedRunningTime="2026-03-16 17:35:52.674861415 +0000 UTC m=+8554.402251702" watchObservedRunningTime="2026-03-16 17:35:52.683205693 +0000 UTC m=+8554.410595980" Mar 16 17:35:55 crc kubenswrapper[4736]: I0316 17:35:55.843816 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:55 crc kubenswrapper[4736]: I0316 17:35:55.844186 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:35:56 crc kubenswrapper[4736]: I0316 17:35:56.884076 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wd8tw" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="registry-server" probeResult="failure" output=< Mar 16 17:35:56 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:35:56 crc kubenswrapper[4736]: > Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.160841 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561376-xlmhs"] Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.162719 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561376-xlmhs"] Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.162844 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.170700 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.170893 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.171012 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.288304 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgpw\" (UniqueName: \"kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw\") pod \"auto-csr-approver-29561376-xlmhs\" (UID: \"1f66c029-5ebf-402a-adf4-9f10b085197b\") " pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.389697 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdgpw\" (UniqueName: \"kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw\") pod \"auto-csr-approver-29561376-xlmhs\" (UID: \"1f66c029-5ebf-402a-adf4-9f10b085197b\") " pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.413768 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdgpw\" (UniqueName: \"kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw\") pod \"auto-csr-approver-29561376-xlmhs\" (UID: \"1f66c029-5ebf-402a-adf4-9f10b085197b\") " pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.500522 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:00 crc kubenswrapper[4736]: I0316 17:36:00.998820 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561376-xlmhs"] Mar 16 17:36:01 crc kubenswrapper[4736]: I0316 17:36:01.749573 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" event={"ID":"1f66c029-5ebf-402a-adf4-9f10b085197b","Type":"ContainerStarted","Data":"2d5946eb89256cb703c7f71981c0d3ece7583e93e7061490436e05253a6882e2"} Mar 16 17:36:01 crc kubenswrapper[4736]: I0316 17:36:01.978477 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:36:01 crc kubenswrapper[4736]: E0316 17:36:01.978973 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:36:02 crc kubenswrapper[4736]: I0316 17:36:02.761022 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" event={"ID":"1f66c029-5ebf-402a-adf4-9f10b085197b","Type":"ContainerStarted","Data":"ba949a9cbb4a232e28e43d61e14493f92e732c9acef87abe7eb237bc70be476e"} Mar 16 17:36:02 crc kubenswrapper[4736]: I0316 17:36:02.780520 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" podStartSLOduration=1.4215319499999999 podStartE2EDuration="2.780500532s" podCreationTimestamp="2026-03-16 17:36:00 +0000 UTC" firstStartedPulling="2026-03-16 17:36:00.993903638 +0000 UTC m=+8562.721293935" lastFinishedPulling="2026-03-16 17:36:02.35287222 +0000 UTC m=+8564.080262517" observedRunningTime="2026-03-16 17:36:02.778545739 +0000 UTC m=+8564.505936066" watchObservedRunningTime="2026-03-16 17:36:02.780500532 +0000 UTC m=+8564.507890819" Mar 16 17:36:03 crc kubenswrapper[4736]: I0316 17:36:03.773820 4736 generic.go:334] "Generic (PLEG): container finished" podID="1f66c029-5ebf-402a-adf4-9f10b085197b" containerID="ba949a9cbb4a232e28e43d61e14493f92e732c9acef87abe7eb237bc70be476e" exitCode=0 Mar 16 17:36:03 crc kubenswrapper[4736]: I0316 17:36:03.773964 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" event={"ID":"1f66c029-5ebf-402a-adf4-9f10b085197b","Type":"ContainerDied","Data":"ba949a9cbb4a232e28e43d61e14493f92e732c9acef87abe7eb237bc70be476e"} Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.130209 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.176492 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdgpw\" (UniqueName: \"kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw\") pod \"1f66c029-5ebf-402a-adf4-9f10b085197b\" (UID: \"1f66c029-5ebf-402a-adf4-9f10b085197b\") " Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.188937 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw" (OuterVolumeSpecName: "kube-api-access-sdgpw") pod "1f66c029-5ebf-402a-adf4-9f10b085197b" (UID: "1f66c029-5ebf-402a-adf4-9f10b085197b"). InnerVolumeSpecName "kube-api-access-sdgpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.278443 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdgpw\" (UniqueName: \"kubernetes.io/projected/1f66c029-5ebf-402a-adf4-9f10b085197b-kube-api-access-sdgpw\") on node \"crc\" DevicePath \"\"" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.792401 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.792298 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561376-xlmhs" event={"ID":"1f66c029-5ebf-402a-adf4-9f10b085197b","Type":"ContainerDied","Data":"2d5946eb89256cb703c7f71981c0d3ece7583e93e7061490436e05253a6882e2"} Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.793277 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5946eb89256cb703c7f71981c0d3ece7583e93e7061490436e05253a6882e2" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.868007 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561370-mxptm"] Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.879276 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561370-mxptm"] Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.898257 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:36:05 crc kubenswrapper[4736]: I0316 17:36:05.948368 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:36:06 crc kubenswrapper[4736]: I0316 17:36:06.139823 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:36:07 crc kubenswrapper[4736]: I0316 17:36:07.006095 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2183f226-fb1b-4a51-a6af-87cb20b6897c" path="/var/lib/kubelet/pods/2183f226-fb1b-4a51-a6af-87cb20b6897c/volumes" Mar 16 17:36:07 crc kubenswrapper[4736]: I0316 17:36:07.810687 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wd8tw" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="registry-server" containerID="cri-o://c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca" gracePeriod=2 Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.291707 4736 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.457959 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities\") pod \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.458130 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8jf\" (UniqueName: \"kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf\") pod \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.458258 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content\") pod \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\" (UID: \"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1\") " Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.461969 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities" (OuterVolumeSpecName: "utilities") pod "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" (UID: "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.476931 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf" (OuterVolumeSpecName: "kube-api-access-gz8jf") pod "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" (UID: "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1"). InnerVolumeSpecName "kube-api-access-gz8jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.561197 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.561229 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8jf\" (UniqueName: \"kubernetes.io/projected/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-kube-api-access-gz8jf\") on node \"crc\" DevicePath \"\"" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.594766 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" (UID: "4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.663789 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.830175 4736 generic.go:334] "Generic (PLEG): container finished" podID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerID="c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca" exitCode=0 Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.830217 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerDied","Data":"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca"} Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.830267 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wd8tw" event={"ID":"4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1","Type":"ContainerDied","Data":"270ea0c51511177f9d55e2e2fda92ea61ca25b92acc75bf9d5d17395c729e92d"} Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.830286 4736 scope.go:117] "RemoveContainer" containerID="c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.830303 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wd8tw" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.880377 4736 scope.go:117] "RemoveContainer" containerID="119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.889641 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.909254 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wd8tw"] Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.917666 4736 scope.go:117] "RemoveContainer" containerID="8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.963673 4736 scope.go:117] "RemoveContainer" containerID="c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca" Mar 16 17:36:08 crc kubenswrapper[4736]: E0316 17:36:08.967899 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca\": container with ID starting with c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca not found: ID does not exist" containerID="c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.967939 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca"} err="failed to get container status \"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca\": rpc error: code = NotFound desc = could not find container \"c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca\": container with ID starting with c46dd8be915c3c2c44781c233e0ba8d1729c4d46e4f421568a48e40781bd10ca not found: ID does not exist" Mar 16 
17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.967961 4736 scope.go:117] "RemoveContainer" containerID="119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5" Mar 16 17:36:08 crc kubenswrapper[4736]: E0316 17:36:08.968488 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5\": container with ID starting with 119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5 not found: ID does not exist" containerID="119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.968515 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5"} err="failed to get container status \"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5\": rpc error: code = NotFound desc = could not find container \"119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5\": container with ID starting with 119f29d9691e41f13e109264c6bd36e6c2535d98af8a224a4c4d5e00a5bddad5 not found: ID does not exist" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.968530 4736 scope.go:117] "RemoveContainer" containerID="8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8" Mar 16 17:36:08 crc kubenswrapper[4736]: E0316 17:36:08.968773 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8\": container with ID starting with 8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8 not found: ID does not exist" containerID="8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.968797 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8"} err="failed to get container status \"8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8\": rpc error: code = NotFound desc = could not find container \"8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8\": container with ID starting with 8555ee35d961b9829b8cecfc14ec344b93377ecb452a7787db608501436133f8 not found: ID does not exist" Mar 16 17:36:08 crc kubenswrapper[4736]: I0316 17:36:08.991704 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" path="/var/lib/kubelet/pods/4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1/volumes" Mar 16 17:36:14 crc kubenswrapper[4736]: I0316 17:36:14.978072 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:36:14 crc kubenswrapper[4736]: E0316 17:36:14.979340 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:36:25 crc kubenswrapper[4736]: I0316 17:36:25.977360 4736 scope.go:117] "RemoveContainer" 
containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:36:25 crc kubenswrapper[4736]: E0316 17:36:25.978299 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:36:38 crc kubenswrapper[4736]: I0316 17:36:38.993262 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:36:38 crc kubenswrapper[4736]: E0316 17:36:38.994036 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.223327 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:36:39 crc kubenswrapper[4736]: E0316 17:36:39.223910 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="registry-server" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.223942 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="registry-server" Mar 16 17:36:39 crc kubenswrapper[4736]: E0316 17:36:39.223968 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f66c029-5ebf-402a-adf4-9f10b085197b" containerName="oc" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.223983 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f66c029-5ebf-402a-adf4-9f10b085197b" containerName="oc" Mar 16 17:36:39 crc kubenswrapper[4736]: E0316 17:36:39.224017 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="extract-content" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.224029 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="extract-content" Mar 16 17:36:39 crc kubenswrapper[4736]: E0316 17:36:39.224050 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="extract-utilities" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.224063 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="extract-utilities" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.224930 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f66c029-5ebf-402a-adf4-9f10b085197b" containerName="oc" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.224975 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff1cfa4-26fa-4452-8c9e-5bfeb97ccab1" containerName="registry-server" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.227280 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.236560 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.425352 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdsn4\" (UniqueName: \"kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.425749 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.425870 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.528384 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdsn4\" (UniqueName: \"kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.528492 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.528698 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.529313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.529374 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.548989 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zdsn4\" (UniqueName: \"kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4\") pod \"community-operators-g877r\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:39 crc kubenswrapper[4736]: I0316 17:36:39.552599 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:40 crc kubenswrapper[4736]: I0316 17:36:40.050082 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:36:40 crc kubenswrapper[4736]: I0316 17:36:40.122611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerStarted","Data":"9b25ac3ba79ef35988fdc6057a43f306f62841be71775f0e04f8f12479a2d11e"} Mar 16 17:36:41 crc kubenswrapper[4736]: I0316 17:36:41.132721 4736 generic.go:334] "Generic (PLEG): container finished" podID="775260d4-2928-45a0-8196-e286f09f748a" containerID="77a5185f978e8e46bdab94b84c16cb0415e2a387c1b0e12a11bf76cf87aca7dd" exitCode=0 Mar 16 17:36:41 crc kubenswrapper[4736]: I0316 17:36:41.132899 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerDied","Data":"77a5185f978e8e46bdab94b84c16cb0415e2a387c1b0e12a11bf76cf87aca7dd"} Mar 16 17:36:46 crc kubenswrapper[4736]: I0316 17:36:46.182824 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerStarted","Data":"fa80f7e0d017571f25bc2dccd278c5bd26d350e7e98a0fcfc4b131d7a5dfaad0"} Mar 16 17:36:48 crc kubenswrapper[4736]: I0316 17:36:48.208295 4736 generic.go:334] "Generic (PLEG): container finished" podID="775260d4-2928-45a0-8196-e286f09f748a" containerID="fa80f7e0d017571f25bc2dccd278c5bd26d350e7e98a0fcfc4b131d7a5dfaad0" exitCode=0 Mar 16 17:36:48 crc kubenswrapper[4736]: I0316 17:36:48.208407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerDied","Data":"fa80f7e0d017571f25bc2dccd278c5bd26d350e7e98a0fcfc4b131d7a5dfaad0"} Mar 16 17:36:50 crc kubenswrapper[4736]: I0316 17:36:50.232609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerStarted","Data":"c365f9a8a06a59da1232f92dd3b3ef790c042fe8520eac9a52aeedf8e2d0bd03"} Mar 16 17:36:50 crc kubenswrapper[4736]: I0316 17:36:50.267648 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g877r" podStartSLOduration=3.24194645 podStartE2EDuration="11.267625877s" podCreationTimestamp="2026-03-16 17:36:39 +0000 UTC" firstStartedPulling="2026-03-16 17:36:41.135290075 +0000 UTC m=+8602.862680362" lastFinishedPulling="2026-03-16 17:36:49.160969492 +0000 UTC m=+8610.888359789" observedRunningTime="2026-03-16 17:36:50.254498459 +0000 UTC m=+8611.981888766" watchObservedRunningTime="2026-03-16 17:36:50.267625877 +0000 UTC m=+8611.995016174" Mar 16 17:36:53 crc kubenswrapper[4736]: I0316 17:36:53.978879 4736 scope.go:117] "RemoveContainer" 
containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:36:53 crc kubenswrapper[4736]: E0316 17:36:53.979615 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:36:59 crc kubenswrapper[4736]: I0316 17:36:59.553042 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:59 crc kubenswrapper[4736]: I0316 17:36:59.554235 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:36:59 crc kubenswrapper[4736]: I0316 17:36:59.600824 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:37:00 crc kubenswrapper[4736]: I0316 17:37:00.426813 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:37:00 crc kubenswrapper[4736]: I0316 17:37:00.535804 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:37:00 crc kubenswrapper[4736]: I0316 17:37:00.623330 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 17:37:00 crc kubenswrapper[4736]: I0316 17:37:00.623571 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wws42" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="registry-server" containerID="cri-o://42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb" gracePeriod=2 Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.164071 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wws42" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.355428 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content\") pod \"938fc316-fda9-4e19-8972-92b50fd432e4\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.355623 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities\") pod \"938fc316-fda9-4e19-8972-92b50fd432e4\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.355823 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zjh8\" (UniqueName: \"kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8\") pod \"938fc316-fda9-4e19-8972-92b50fd432e4\" (UID: \"938fc316-fda9-4e19-8972-92b50fd432e4\") " Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.357352 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities" (OuterVolumeSpecName: "utilities") pod "938fc316-fda9-4e19-8972-92b50fd432e4" (UID: "938fc316-fda9-4e19-8972-92b50fd432e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.363409 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8" (OuterVolumeSpecName: "kube-api-access-5zjh8") pod "938fc316-fda9-4e19-8972-92b50fd432e4" (UID: "938fc316-fda9-4e19-8972-92b50fd432e4"). InnerVolumeSpecName "kube-api-access-5zjh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.391552 4736 generic.go:334] "Generic (PLEG): container finished" podID="938fc316-fda9-4e19-8972-92b50fd432e4" containerID="42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb" exitCode=0 Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.391593 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerDied","Data":"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb"} Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.391658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wws42" event={"ID":"938fc316-fda9-4e19-8972-92b50fd432e4","Type":"ContainerDied","Data":"79933b72254dbc7ca92f3697eb40601b6b6dc486b5076811a398aa6c88d848fb"} Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.391675 4736 scope.go:117] "RemoveContainer" containerID="42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.391831 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wws42" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.422983 4736 scope.go:117] "RemoveContainer" containerID="4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.429553 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "938fc316-fda9-4e19-8972-92b50fd432e4" (UID: "938fc316-fda9-4e19-8972-92b50fd432e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.448372 4736 scope.go:117] "RemoveContainer" containerID="189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.457915 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zjh8\" (UniqueName: \"kubernetes.io/projected/938fc316-fda9-4e19-8972-92b50fd432e4-kube-api-access-5zjh8\") on node \"crc\" DevicePath \"\"" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.457946 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.457956 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938fc316-fda9-4e19-8972-92b50fd432e4-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.490586 4736 scope.go:117] "RemoveContainer" containerID="42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb" Mar 16 17:37:01 crc kubenswrapper[4736]: E0316 17:37:01.491054 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb\": container with ID starting with 42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb not found: ID does not exist" containerID="42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.491117 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb"} err="failed to get container status \"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb\": rpc error: code = NotFound desc = could not find container \"42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb\": container with ID starting with 42c11afcf4ee104f24a7ccb5d5ef467a2fa779d9c8b4d5fb017af5946632fbdb not found: ID does not exist" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.491145 4736 scope.go:117] "RemoveContainer" containerID="4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44" Mar 16 17:37:01 crc kubenswrapper[4736]: E0316 17:37:01.492150 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44\": container with ID starting with 4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44 not found: ID does not exist" 
containerID="4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.492187 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44"} err="failed to get container status \"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44\": rpc error: code = NotFound desc = could not find container \"4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44\": container with ID starting with 4ee5b776e42c0f67def414421fb1af20389093791a3c626d58c25b47ac2edb44 not found: ID does not exist" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.492211 4736 scope.go:117] "RemoveContainer" containerID="189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726" Mar 16 17:37:01 crc kubenswrapper[4736]: E0316 17:37:01.492595 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726\": container with ID starting with 189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726 not found: ID does not exist" containerID="189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.492699 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726"} err="failed to get container status \"189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726\": rpc error: code = NotFound desc = could not find container \"189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726\": container with ID starting with 189ac835f6e9e1711807a4e8b4f73ef202c4b94c8a987d6b125974926f288726 not found: ID does not exist" Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.759595 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 17:37:01 crc kubenswrapper[4736]: I0316 17:37:01.767356 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wws42"] Mar 16 17:37:02 crc kubenswrapper[4736]: I0316 17:37:02.980880 4736 scope.go:117] "RemoveContainer" containerID="353f3c4fdd2a1a2b5be3f845b20ab52e8c8d4811326048909fa4e4eecc47dce7" Mar 16 17:37:02 crc kubenswrapper[4736]: I0316 17:37:02.993924 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" path="/var/lib/kubelet/pods/938fc316-fda9-4e19-8972-92b50fd432e4/volumes" Mar 16 17:37:06 crc kubenswrapper[4736]: I0316 17:37:06.978384 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:37:06 crc kubenswrapper[4736]: E0316 17:37:06.979949 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:37:21 crc kubenswrapper[4736]: I0316 17:37:21.978318 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:37:21 crc 
kubenswrapper[4736]: E0316 17:37:21.979086 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:37:35 crc kubenswrapper[4736]: I0316 17:37:35.978336 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:37:35 crc kubenswrapper[4736]: E0316 17:37:35.979177 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:37:49 crc kubenswrapper[4736]: I0316 17:37:49.978323 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:37:49 crc kubenswrapper[4736]: E0316 17:37:49.980482 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.116162 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:37:50 crc kubenswrapper[4736]: E0316 17:37:50.116617 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="extract-utilities" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.116674 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="extract-utilities" Mar 16 17:37:50 crc kubenswrapper[4736]: E0316 17:37:50.116692 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="registry-server" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.116703 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="registry-server" Mar 16 17:37:50 crc kubenswrapper[4736]: E0316 17:37:50.116740 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="extract-content" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.116748 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="extract-content" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.116973 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="938fc316-fda9-4e19-8972-92b50fd432e4" containerName="registry-server" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.125559 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.136542 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.198413 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.198497 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjj5z\" (UniqueName: \"kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.198680 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.301342 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.301432 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.301472 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjj5z\" (UniqueName: \"kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.302282 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.302370 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.324084 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wjj5z\" (UniqueName: \"kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z\") pod \"redhat-marketplace-x26b6\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:50 crc kubenswrapper[4736]: I0316 17:37:50.451934 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:37:51 crc kubenswrapper[4736]: I0316 17:37:51.009341 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:37:51 crc kubenswrapper[4736]: I0316 17:37:51.879466 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerID="7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a" exitCode=0 Mar 16 17:37:51 crc kubenswrapper[4736]: I0316 17:37:51.879552 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerDied","Data":"7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a"} Mar 16 17:37:51 crc kubenswrapper[4736]: I0316 17:37:51.879774 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerStarted","Data":"a964513371a907240be2d1ec8868b681d528019feb7beb6e3e4c26bb22f3eea1"} Mar 16 17:37:53 crc kubenswrapper[4736]: I0316 17:37:53.901379 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerStarted","Data":"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721"} Mar 16 17:37:54 crc kubenswrapper[4736]: I0316 17:37:54.915083 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerID="20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721" exitCode=0 Mar 16 17:37:54 crc kubenswrapper[4736]: I0316 17:37:54.915373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerDied","Data":"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721"} Mar 16 17:37:55 crc kubenswrapper[4736]: I0316 17:37:55.948041 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerStarted","Data":"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8"} Mar 16 17:37:55 crc kubenswrapper[4736]: I0316 17:37:55.977946 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x26b6" podStartSLOduration=2.51082026 podStartE2EDuration="5.977925744s" podCreationTimestamp="2026-03-16 17:37:50 +0000 UTC" firstStartedPulling="2026-03-16 17:37:51.882708097 +0000 UTC m=+8673.610098384" lastFinishedPulling="2026-03-16 17:37:55.349813581 +0000 UTC m=+8677.077203868" observedRunningTime="2026-03-16 17:37:55.970171752 +0000 UTC m=+8677.697562039" watchObservedRunningTime="2026-03-16 17:37:55.977925744 +0000 UTC m=+8677.705316041" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.144515 4736 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29561378-gp8cb"] Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.147292 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.150478 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.150820 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.150876 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.157566 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561378-gp8cb"] Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.299645 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bpjx\" (UniqueName: \"kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx\") pod \"auto-csr-approver-29561378-gp8cb\" (UID: \"e59f235d-e272-4ebb-8573-e1473496456d\") " pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.401988 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bpjx\" (UniqueName: \"kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx\") pod \"auto-csr-approver-29561378-gp8cb\" (UID: \"e59f235d-e272-4ebb-8573-e1473496456d\") " pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.426651 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bpjx\" (UniqueName: \"kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx\") pod \"auto-csr-approver-29561378-gp8cb\" (UID: \"e59f235d-e272-4ebb-8573-e1473496456d\") " pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.452901 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.452955 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.499819 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:00 crc kubenswrapper[4736]: I0316 17:38:00.499961 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:01 crc kubenswrapper[4736]: I0316 17:38:01.036395 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:01 crc kubenswrapper[4736]: I0316 17:38:01.089959 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:38:01 crc kubenswrapper[4736]: I0316 17:38:01.308830 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561378-gp8cb"] Mar 16 17:38:01 crc kubenswrapper[4736]: W0316 17:38:01.305783 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59f235d_e272_4ebb_8573_e1473496456d.slice/crio-65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d WatchSource:0}: Error finding container 65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d: Status 404 returned error can't find the container with id 65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d Mar 16 17:38:02 crc kubenswrapper[4736]: I0316 17:38:02.001794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" event={"ID":"e59f235d-e272-4ebb-8573-e1473496456d","Type":"ContainerStarted","Data":"65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d"} Mar 16 17:38:02 crc kubenswrapper[4736]: I0316 17:38:02.978346 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:38:02 crc kubenswrapper[4736]: E0316 17:38:02.978679 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.011162 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x26b6" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="registry-server" containerID="cri-o://a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8" gracePeriod=2 Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.529316 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.673868 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjj5z\" (UniqueName: \"kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z\") pod \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.674044 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities\") pod \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.674145 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content\") pod \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\" (UID: \"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf\") " Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.674963 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities" (OuterVolumeSpecName: "utilities") pod "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" (UID: "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.675310 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.681338 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z" (OuterVolumeSpecName: "kube-api-access-wjj5z") pod "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" (UID: "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf"). InnerVolumeSpecName "kube-api-access-wjj5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.702389 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" (UID: "3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.777419 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjj5z\" (UniqueName: \"kubernetes.io/projected/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-kube-api-access-wjj5z\") on node \"crc\" DevicePath \"\"" Mar 16 17:38:03 crc kubenswrapper[4736]: I0316 17:38:03.777449 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.024654 4736 generic.go:334] "Generic (PLEG): container finished" podID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerID="a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8" exitCode=0 Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.024690 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerDied","Data":"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8"} Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.024720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x26b6" event={"ID":"3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf","Type":"ContainerDied","Data":"a964513371a907240be2d1ec8868b681d528019feb7beb6e3e4c26bb22f3eea1"} Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.024726 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x26b6" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.024736 4736 scope.go:117] "RemoveContainer" containerID="a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.069472 4736 scope.go:117] "RemoveContainer" containerID="20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.081708 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.096421 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x26b6"] Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.103027 4736 scope.go:117] "RemoveContainer" containerID="7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.136713 4736 scope.go:117] "RemoveContainer" containerID="a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8" Mar 16 17:38:04 crc kubenswrapper[4736]: E0316 17:38:04.139450 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8\": container with ID starting with a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8 not found: ID does not exist" containerID="a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.139496 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8"} err="failed to get container status 
\"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8\": rpc error: code = NotFound desc = could not find container \"a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8\": container with ID starting with a2814171b9d3326ebc69a72a4b8f50f755ef84297d73f43b01f65d4412bdadb8 not found: ID does not exist" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.139524 4736 scope.go:117] "RemoveContainer" containerID="20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721" Mar 16 17:38:04 crc kubenswrapper[4736]: E0316 17:38:04.141971 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721\": container with ID starting with 20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721 not found: ID does not exist" containerID="20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.142077 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721"} err="failed to get container status \"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721\": rpc error: code = NotFound desc = could not find container \"20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721\": container with ID starting with 20ba4c896b4807de1b96c40e5763e415adcb2d33b1cc3ee0f5619a2e389d5721 not found: ID does not exist" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.142169 4736 scope.go:117] "RemoveContainer" containerID="7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a" Mar 16 17:38:04 crc kubenswrapper[4736]: E0316 17:38:04.143151 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a\": container with ID starting with 7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a not found: ID does not exist" containerID="7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.143191 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a"} err="failed to get container status \"7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a\": rpc error: code = NotFound desc = could not find container \"7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a\": container with ID starting with 7634e5a6d40471898979f270da8656c690078baf842b4a0791454a25f7bf885a not found: ID does not exist" Mar 16 17:38:04 crc kubenswrapper[4736]: I0316 17:38:04.992288 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" path="/var/lib/kubelet/pods/3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf/volumes" Mar 16 17:38:05 crc kubenswrapper[4736]: I0316 17:38:05.038014 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" event={"ID":"e59f235d-e272-4ebb-8573-e1473496456d","Type":"ContainerStarted","Data":"067f4ef6415c278d2484b6a90c1ef1c5f842e2626b3f85778242d961d88aac6d"} Mar 16 17:38:05 crc kubenswrapper[4736]: I0316 17:38:05.072203 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29561378-gp8cb" podStartSLOduration=3.131461176 podStartE2EDuration="5.072176577s" podCreationTimestamp="2026-03-16 17:38:00 +0000 UTC" firstStartedPulling="2026-03-16 17:38:01.306269729 +0000 UTC m=+8683.033660016" lastFinishedPulling="2026-03-16 17:38:03.24698513 +0000 UTC m=+8684.974375417" observedRunningTime="2026-03-16 17:38:05.058557425 +0000 UTC m=+8686.785947722" watchObservedRunningTime="2026-03-16 17:38:05.072176577 +0000 UTC m=+8686.799566864" Mar 16 17:38:06 crc kubenswrapper[4736]: I0316 17:38:06.051692 4736 generic.go:334] "Generic (PLEG): container finished" podID="e59f235d-e272-4ebb-8573-e1473496456d" containerID="067f4ef6415c278d2484b6a90c1ef1c5f842e2626b3f85778242d961d88aac6d" exitCode=0 Mar 16 17:38:06 crc kubenswrapper[4736]: I0316 17:38:06.051745 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" event={"ID":"e59f235d-e272-4ebb-8573-e1473496456d","Type":"ContainerDied","Data":"067f4ef6415c278d2484b6a90c1ef1c5f842e2626b3f85778242d961d88aac6d"} Mar 16 17:38:07 crc kubenswrapper[4736]: I0316 17:38:07.502408 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:07 crc kubenswrapper[4736]: I0316 17:38:07.580173 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bpjx\" (UniqueName: \"kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx\") pod \"e59f235d-e272-4ebb-8573-e1473496456d\" (UID: \"e59f235d-e272-4ebb-8573-e1473496456d\") " Mar 16 17:38:07 crc kubenswrapper[4736]: I0316 17:38:07.586720 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx" (OuterVolumeSpecName: "kube-api-access-7bpjx") pod "e59f235d-e272-4ebb-8573-e1473496456d" (UID: "e59f235d-e272-4ebb-8573-e1473496456d"). InnerVolumeSpecName "kube-api-access-7bpjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:38:07 crc kubenswrapper[4736]: I0316 17:38:07.682866 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bpjx\" (UniqueName: \"kubernetes.io/projected/e59f235d-e272-4ebb-8573-e1473496456d-kube-api-access-7bpjx\") on node \"crc\" DevicePath \"\"" Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.069401 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" event={"ID":"e59f235d-e272-4ebb-8573-e1473496456d","Type":"ContainerDied","Data":"65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d"} Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.069446 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c6aa514892e03234138732b0ed32304217c8e413fbca8384b525e3d2164b2d" Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.069868 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561378-gp8cb" Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.147792 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561372-4jt7c"] Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.161536 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561372-4jt7c"] Mar 16 17:38:08 crc kubenswrapper[4736]: I0316 17:38:08.991245 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f429c505-8dd1-49de-b210-c2a2974d47df" path="/var/lib/kubelet/pods/f429c505-8dd1-49de-b210-c2a2974d47df/volumes" Mar 16 17:38:13 crc kubenswrapper[4736]: I0316 17:38:13.978236 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:38:13 crc kubenswrapper[4736]: E0316 17:38:13.979556 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:38:26 crc kubenswrapper[4736]: I0316 17:38:26.978603 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:38:26 crc kubenswrapper[4736]: E0316 17:38:26.979501 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:38:37 crc kubenswrapper[4736]: I0316 17:38:37.977825 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:38:37 crc kubenswrapper[4736]: E0316 17:38:37.978670 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:38:50 crc kubenswrapper[4736]: I0316 17:38:50.978770 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:38:50 crc kubenswrapper[4736]: E0316 17:38:50.981357 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:39:01 crc kubenswrapper[4736]: I0316 17:39:01.978463 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 
17:39:01 crc kubenswrapper[4736]: E0316 17:39:01.979288 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:39:03 crc kubenswrapper[4736]: I0316 17:39:03.192139 4736 scope.go:117] "RemoveContainer" containerID="9a06f7dccbb02fbe81e87f5c95605eae25c2ac590e4ec0bcb44c04bafef10f7d" Mar 16 17:39:12 crc kubenswrapper[4736]: I0316 17:39:12.978630 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:39:12 crc kubenswrapper[4736]: E0316 17:39:12.979721 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:39:26 crc kubenswrapper[4736]: I0316 17:39:26.979025 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:39:26 crc kubenswrapper[4736]: E0316 17:39:26.980034 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:39:37 crc kubenswrapper[4736]: I0316 17:39:37.995438 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:39:37 crc kubenswrapper[4736]: E0316 17:39:37.996542 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:39:50 crc kubenswrapper[4736]: I0316 17:39:50.977905 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:39:50 crc kubenswrapper[4736]: E0316 17:39:50.978726 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.153079 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561380-276jd"] Mar 16 17:40:00 crc 
kubenswrapper[4736]: E0316 17:40:00.157201 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59f235d-e272-4ebb-8573-e1473496456d" containerName="oc" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157237 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59f235d-e272-4ebb-8573-e1473496456d" containerName="oc" Mar 16 17:40:00 crc kubenswrapper[4736]: E0316 17:40:00.157256 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="registry-server" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157263 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="registry-server" Mar 16 17:40:00 crc kubenswrapper[4736]: E0316 17:40:00.157282 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="extract-utilities" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157290 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="extract-utilities" Mar 16 17:40:00 crc kubenswrapper[4736]: E0316 17:40:00.157309 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="extract-content" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157329 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="extract-content" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157680 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59f235d-e272-4ebb-8573-e1473496456d" containerName="oc" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.157697 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6e4a6c-d57e-4582-89eb-0d671b3c7fbf" containerName="registry-server" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.158517 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.165412 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.165621 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.165770 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.167509 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561380-276jd"] Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.331443 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z79nk\" (UniqueName: \"kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk\") pod \"auto-csr-approver-29561380-276jd\" (UID: \"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a\") " pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.433664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z79nk\" (UniqueName: \"kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk\") pod \"auto-csr-approver-29561380-276jd\" (UID: \"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a\") " pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.455684 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z79nk\" (UniqueName: \"kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk\") pod \"auto-csr-approver-29561380-276jd\" (UID: \"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a\") " pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:00 crc kubenswrapper[4736]: I0316 17:40:00.481316 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:01 crc kubenswrapper[4736]: I0316 17:40:01.129220 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561380-276jd"] Mar 16 17:40:01 crc kubenswrapper[4736]: I0316 17:40:01.187280 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561380-276jd" event={"ID":"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a","Type":"ContainerStarted","Data":"76dd8eba1f839f2f317d1d98a0d781773a6796fc7075dec5d2baea064847dda7"} Mar 16 17:40:03 crc kubenswrapper[4736]: I0316 17:40:03.210861 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561380-276jd" event={"ID":"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a","Type":"ContainerStarted","Data":"dd2b7b54d992e8628119b3b0c3b4512ba78433bb0adf3be9968b9c0a49e00b4c"} Mar 16 17:40:03 crc kubenswrapper[4736]: I0316 17:40:03.232778 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561380-276jd" podStartSLOduration=1.931565229 podStartE2EDuration="3.232757234s" podCreationTimestamp="2026-03-16 17:40:00 +0000 UTC" firstStartedPulling="2026-03-16 17:40:01.143314795 +0000 UTC m=+8802.870705072" lastFinishedPulling="2026-03-16 17:40:02.44450679 +0000 UTC m=+8804.171897077" observedRunningTime="2026-03-16 17:40:03.226337099 +0000 UTC m=+8804.953727386" watchObservedRunningTime="2026-03-16 17:40:03.232757234 +0000 UTC m=+8804.960147531" Mar 16 17:40:04 crc kubenswrapper[4736]: I0316 17:40:04.222834 4736 generic.go:334] "Generic (PLEG): container finished" podID="afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" containerID="dd2b7b54d992e8628119b3b0c3b4512ba78433bb0adf3be9968b9c0a49e00b4c" exitCode=0 Mar 16 17:40:04 crc kubenswrapper[4736]: I0316 17:40:04.222883 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561380-276jd" event={"ID":"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a","Type":"ContainerDied","Data":"dd2b7b54d992e8628119b3b0c3b4512ba78433bb0adf3be9968b9c0a49e00b4c"} Mar 16 17:40:04 crc kubenswrapper[4736]: I0316 17:40:04.978233 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:40:04 crc kubenswrapper[4736]: E0316 17:40:04.979437 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:40:05 crc kubenswrapper[4736]: I0316 17:40:05.675883 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:05 crc kubenswrapper[4736]: I0316 17:40:05.830867 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z79nk\" (UniqueName: \"kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk\") pod \"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a\" (UID: \"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a\") " Mar 16 17:40:05 crc kubenswrapper[4736]: I0316 17:40:05.840407 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk" (OuterVolumeSpecName: "kube-api-access-z79nk") pod "afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" (UID: "afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a"). InnerVolumeSpecName "kube-api-access-z79nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:40:05 crc kubenswrapper[4736]: I0316 17:40:05.933779 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z79nk\" (UniqueName: \"kubernetes.io/projected/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a-kube-api-access-z79nk\") on node \"crc\" DevicePath \"\"" Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.241491 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561380-276jd" event={"ID":"afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a","Type":"ContainerDied","Data":"76dd8eba1f839f2f317d1d98a0d781773a6796fc7075dec5d2baea064847dda7"} Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.241540 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76dd8eba1f839f2f317d1d98a0d781773a6796fc7075dec5d2baea064847dda7" Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.241604 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561380-276jd" Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.311558 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561374-mfsrr"] Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.321938 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561374-mfsrr"] Mar 16 17:40:06 crc kubenswrapper[4736]: I0316 17:40:06.989593 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acd0f5b-8463-4acc-9dac-f5129e0dcea8" path="/var/lib/kubelet/pods/6acd0f5b-8463-4acc-9dac-f5129e0dcea8/volumes" Mar 16 17:40:15 crc kubenswrapper[4736]: I0316 17:40:15.978166 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" Mar 16 17:40:16 crc kubenswrapper[4736]: I0316 17:40:16.329015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4"} Mar 16 17:41:03 crc kubenswrapper[4736]: I0316 17:41:03.363352 4736 scope.go:117] "RemoveContainer" containerID="23a2537b15dc16f527427374d3a3d4d60b4881010bda9dd59c41dead69d8297f" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.143894 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561382-khbfs"] Mar 16 17:42:00 crc kubenswrapper[4736]: E0316 17:42:00.160257 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" containerName="oc" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.160492 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" containerName="oc" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.162342 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" containerName="oc" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.172058 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561382-khbfs"] Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.172226 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.175841 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.177168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.177177 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.201463 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z686f\" (UniqueName: \"kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f\") pod \"auto-csr-approver-29561382-khbfs\" (UID: \"91675444-4ed8-4f0f-94bb-d358385a3dbc\") " pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.303339 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z686f\" (UniqueName: \"kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f\") pod \"auto-csr-approver-29561382-khbfs\" (UID: \"91675444-4ed8-4f0f-94bb-d358385a3dbc\") " pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.330278 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z686f\" (UniqueName: \"kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f\") pod \"auto-csr-approver-29561382-khbfs\" (UID: \"91675444-4ed8-4f0f-94bb-d358385a3dbc\") " pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.504028 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:00 crc kubenswrapper[4736]: I0316 17:42:00.990978 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561382-khbfs"] Mar 16 17:42:01 crc kubenswrapper[4736]: I0316 17:42:01.008448 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:42:01 crc kubenswrapper[4736]: I0316 17:42:01.373944 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561382-khbfs" event={"ID":"91675444-4ed8-4f0f-94bb-d358385a3dbc","Type":"ContainerStarted","Data":"c2f1cc4d181b3677c1568323eec5de471ac1b79153be8235dc0d218f41210140"} Mar 16 17:42:03 crc kubenswrapper[4736]: I0316 17:42:03.398508 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561382-khbfs" event={"ID":"91675444-4ed8-4f0f-94bb-d358385a3dbc","Type":"ContainerStarted","Data":"30ebc6af604f01a832a33dd64bef5c9a093fd49fca1cf2fc086e25ee2cef83c1"} Mar 16 17:42:03 crc kubenswrapper[4736]: I0316 17:42:03.429399 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561382-khbfs" podStartSLOduration=2.096776653 podStartE2EDuration="3.429372326s" podCreationTimestamp="2026-03-16 17:42:00 +0000 UTC" firstStartedPulling="2026-03-16 17:42:01.00231892 +0000 UTC m=+8922.729709207" lastFinishedPulling="2026-03-16 17:42:02.334914553 +0000 UTC m=+8924.062304880" observedRunningTime="2026-03-16 17:42:03.416381011 +0000 UTC m=+8925.143771298" watchObservedRunningTime="2026-03-16 17:42:03.429372326 +0000 UTC m=+8925.156762633" Mar 16 17:42:04 crc kubenswrapper[4736]: E0316 17:42:04.364861 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91675444_4ed8_4f0f_94bb_d358385a3dbc.slice/crio-conmon-30ebc6af604f01a832a33dd64bef5c9a093fd49fca1cf2fc086e25ee2cef83c1.scope\": RecentStats: unable to find data in memory cache]" Mar 16 17:42:04 crc kubenswrapper[4736]: I0316 17:42:04.419701 4736 generic.go:334] "Generic (PLEG): container finished" podID="91675444-4ed8-4f0f-94bb-d358385a3dbc" containerID="30ebc6af604f01a832a33dd64bef5c9a093fd49fca1cf2fc086e25ee2cef83c1" exitCode=0 Mar 16 17:42:04 crc kubenswrapper[4736]: I0316 17:42:04.419774 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561382-khbfs" event={"ID":"91675444-4ed8-4f0f-94bb-d358385a3dbc","Type":"ContainerDied","Data":"30ebc6af604f01a832a33dd64bef5c9a093fd49fca1cf2fc086e25ee2cef83c1"} Mar 16 17:42:05 crc kubenswrapper[4736]: I0316 17:42:05.815329 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:05 crc kubenswrapper[4736]: I0316 17:42:05.918661 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z686f\" (UniqueName: \"kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f\") pod \"91675444-4ed8-4f0f-94bb-d358385a3dbc\" (UID: \"91675444-4ed8-4f0f-94bb-d358385a3dbc\") " Mar 16 17:42:05 crc kubenswrapper[4736]: I0316 17:42:05.926198 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f" (OuterVolumeSpecName: "kube-api-access-z686f") pod "91675444-4ed8-4f0f-94bb-d358385a3dbc" (UID: "91675444-4ed8-4f0f-94bb-d358385a3dbc"). InnerVolumeSpecName "kube-api-access-z686f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.021328 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z686f\" (UniqueName: \"kubernetes.io/projected/91675444-4ed8-4f0f-94bb-d358385a3dbc-kube-api-access-z686f\") on node \"crc\" DevicePath \"\"" Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.439990 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561382-khbfs" event={"ID":"91675444-4ed8-4f0f-94bb-d358385a3dbc","Type":"ContainerDied","Data":"c2f1cc4d181b3677c1568323eec5de471ac1b79153be8235dc0d218f41210140"} Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.440037 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f1cc4d181b3677c1568323eec5de471ac1b79153be8235dc0d218f41210140" Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.440160 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561382-khbfs" Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.497872 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561376-xlmhs"] Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.510727 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561376-xlmhs"] Mar 16 17:42:06 crc kubenswrapper[4736]: I0316 17:42:06.988311 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f66c029-5ebf-402a-adf4-9f10b085197b" path="/var/lib/kubelet/pods/1f66c029-5ebf-402a-adf4-9f10b085197b/volumes" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.089592 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:42:34 crc kubenswrapper[4736]: E0316 17:42:34.090602 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91675444-4ed8-4f0f-94bb-d358385a3dbc" containerName="oc" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.090615 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="91675444-4ed8-4f0f-94bb-d358385a3dbc" containerName="oc" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.090858 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="91675444-4ed8-4f0f-94bb-d358385a3dbc" containerName="oc" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.097275 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.121717 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.192576 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.192669 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262bs\" (UniqueName: \"kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.192726 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.294517 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.294591 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262bs\" (UniqueName: \"kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.294616 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.295236 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.295260 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.317490 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-262bs\" (UniqueName: \"kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs\") pod \"redhat-operators-qql8q\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.443163 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:34 crc kubenswrapper[4736]: I0316 17:42:34.991212 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:42:35 crc kubenswrapper[4736]: I0316 17:42:35.747749 4736 generic.go:334] "Generic (PLEG): container finished" podID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerID="13e579fee27d9b18181982f4e4cd420a72cf379668151f70a904d41a51d25aaf" exitCode=0 Mar 16 17:42:35 crc kubenswrapper[4736]: I0316 17:42:35.748439 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerDied","Data":"13e579fee27d9b18181982f4e4cd420a72cf379668151f70a904d41a51d25aaf"} Mar 16 17:42:35 crc kubenswrapper[4736]: I0316 17:42:35.748494 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerStarted","Data":"de4dddc3f6418e71e26f3090ab6eb653c41618a3ca1cb10c8411e808eb519d6a"} Mar 16 17:42:36 crc kubenswrapper[4736]: I0316 17:42:36.759443 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerStarted","Data":"68ffe61205dbd3e78ef2e2c9dc655eb9329fcb107d35e4f7df26a8a6d76ede81"} Mar 16 17:42:38 crc kubenswrapper[4736]: I0316 17:42:38.508074 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:42:38 crc kubenswrapper[4736]: I0316 17:42:38.509364 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:42:41 crc kubenswrapper[4736]: I0316 17:42:41.815885 4736 generic.go:334] "Generic (PLEG): container finished" podID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerID="68ffe61205dbd3e78ef2e2c9dc655eb9329fcb107d35e4f7df26a8a6d76ede81" exitCode=0 Mar 16 17:42:41 crc kubenswrapper[4736]: I0316 17:42:41.815974 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerDied","Data":"68ffe61205dbd3e78ef2e2c9dc655eb9329fcb107d35e4f7df26a8a6d76ede81"} Mar 16 17:42:42 crc kubenswrapper[4736]: I0316 17:42:42.830430 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerStarted","Data":"ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b"} Mar 16 17:42:42 crc kubenswrapper[4736]: I0316 17:42:42.879391 4736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qql8q" podStartSLOduration=2.39888098 podStartE2EDuration="8.879371972s" podCreationTimestamp="2026-03-16 17:42:34 +0000 UTC" firstStartedPulling="2026-03-16 17:42:35.751184871 +0000 UTC m=+8957.478575188" lastFinishedPulling="2026-03-16 17:42:42.231675873 +0000 UTC m=+8963.959066180" observedRunningTime="2026-03-16 17:42:42.86722459 +0000 UTC m=+8964.594614877" watchObservedRunningTime="2026-03-16 17:42:42.879371972 +0000 UTC m=+8964.606762259" Mar 16 17:42:44 crc kubenswrapper[4736]: I0316 17:42:44.443863 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:44 crc kubenswrapper[4736]: I0316 17:42:44.444233 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:42:45 crc kubenswrapper[4736]: I0316 17:42:45.496706 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qql8q" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" probeResult="failure" output=< Mar 16 17:42:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:42:45 crc kubenswrapper[4736]: > Mar 16 17:42:55 crc kubenswrapper[4736]: I0316 17:42:55.522871 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qql8q" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" probeResult="failure" output=< Mar 16 17:42:55 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:42:55 crc kubenswrapper[4736]: > Mar 16 17:43:03 crc kubenswrapper[4736]: I0316 17:43:03.544585 4736 scope.go:117] "RemoveContainer" containerID="ba949a9cbb4a232e28e43d61e14493f92e732c9acef87abe7eb237bc70be476e" Mar 16 17:43:05 crc kubenswrapper[4736]: I0316 17:43:05.509942 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qql8q" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" probeResult="failure" output=< Mar 16 17:43:05 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:43:05 crc kubenswrapper[4736]: > Mar 16 17:43:08 crc kubenswrapper[4736]: I0316 17:43:08.508004 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:43:08 crc kubenswrapper[4736]: I0316 17:43:08.508344 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:43:15 crc kubenswrapper[4736]: I0316 17:43:15.499145 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qql8q" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" probeResult="failure" output=< Mar 16 17:43:15 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:43:15 crc kubenswrapper[4736]: > Mar 16 17:43:24 crc 
kubenswrapper[4736]: I0316 17:43:24.724961 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:43:24 crc kubenswrapper[4736]: I0316 17:43:24.826684 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:43:25 crc kubenswrapper[4736]: I0316 17:43:25.002205 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:43:26 crc kubenswrapper[4736]: I0316 17:43:26.215610 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qql8q" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" containerID="cri-o://ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b" gracePeriod=2 Mar 16 17:43:26 crc kubenswrapper[4736]: E0316 17:43:26.472810 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc50499fc_837e_4bae_a6cc_0b0762a8fbf0.slice/crio-ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc50499fc_837e_4bae_a6cc_0b0762a8fbf0.slice/crio-conmon-ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b.scope\": RecentStats: unable to find data in memory cache]" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.225421 4736 generic.go:334] "Generic (PLEG): container finished" podID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerID="ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b" exitCode=0 Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.225498 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerDied","Data":"ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b"} Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.225735 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qql8q" event={"ID":"c50499fc-837e-4bae-a6cc-0b0762a8fbf0","Type":"ContainerDied","Data":"de4dddc3f6418e71e26f3090ab6eb653c41618a3ca1cb10c8411e808eb519d6a"} Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.225750 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4dddc3f6418e71e26f3090ab6eb653c41618a3ca1cb10c8411e808eb519d6a" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.316435 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.481235 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262bs\" (UniqueName: \"kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs\") pod \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.481527 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities\") pod \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.481562 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content\") pod \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\" (UID: \"c50499fc-837e-4bae-a6cc-0b0762a8fbf0\") " Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.481992 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities" (OuterVolumeSpecName: "utilities") pod "c50499fc-837e-4bae-a6cc-0b0762a8fbf0" (UID: "c50499fc-837e-4bae-a6cc-0b0762a8fbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.498478 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs" (OuterVolumeSpecName: "kube-api-access-262bs") pod "c50499fc-837e-4bae-a6cc-0b0762a8fbf0" (UID: "c50499fc-837e-4bae-a6cc-0b0762a8fbf0"). InnerVolumeSpecName "kube-api-access-262bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.585231 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.585578 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262bs\" (UniqueName: \"kubernetes.io/projected/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-kube-api-access-262bs\") on node \"crc\" DevicePath \"\"" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.640770 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c50499fc-837e-4bae-a6cc-0b0762a8fbf0" (UID: "c50499fc-837e-4bae-a6cc-0b0762a8fbf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:43:27 crc kubenswrapper[4736]: I0316 17:43:27.687615 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c50499fc-837e-4bae-a6cc-0b0762a8fbf0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:43:28 crc kubenswrapper[4736]: I0316 17:43:28.244187 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qql8q" Mar 16 17:43:28 crc kubenswrapper[4736]: I0316 17:43:28.280885 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:43:28 crc kubenswrapper[4736]: I0316 17:43:28.300011 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qql8q"] Mar 16 17:43:28 crc kubenswrapper[4736]: I0316 17:43:28.991989 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" path="/var/lib/kubelet/pods/c50499fc-837e-4bae-a6cc-0b0762a8fbf0/volumes" Mar 16 17:43:38 crc kubenswrapper[4736]: I0316 17:43:38.508022 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:43:38 crc kubenswrapper[4736]: I0316 17:43:38.508592 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:43:38 crc kubenswrapper[4736]: I0316 17:43:38.508641 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:43:38 crc kubenswrapper[4736]: I0316 17:43:38.509522 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:43:38 crc kubenswrapper[4736]: I0316 17:43:38.509585 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4" gracePeriod=600 Mar 16 17:43:39 crc kubenswrapper[4736]: I0316 17:43:39.357324 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4" exitCode=0 Mar 16 17:43:39 crc kubenswrapper[4736]: I0316 17:43:39.357373 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4"} Mar 16 17:43:39 crc kubenswrapper[4736]: I0316 17:43:39.357681 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43"} Mar 16 17:43:39 crc kubenswrapper[4736]: I0316 17:43:39.357704 4736 scope.go:117] "RemoveContainer" containerID="a6f13bd75908353872b3bf2c388585e5e49c3a2e9688424519159d9839fad78a" 
Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.225751 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561384-sz9d4"] Mar 16 17:44:00 crc kubenswrapper[4736]: E0316 17:44:00.228548 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="extract-utilities" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.228571 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="extract-utilities" Mar 16 17:44:00 crc kubenswrapper[4736]: E0316 17:44:00.228601 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.228612 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" Mar 16 17:44:00 crc kubenswrapper[4736]: E0316 17:44:00.228640 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="extract-content" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.228651 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="extract-content" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.228985 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50499fc-837e-4bae-a6cc-0b0762a8fbf0" containerName="registry-server" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.229922 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.233548 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.233572 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.233855 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.260586 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561384-sz9d4"] Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.290048 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdg5v\" (UniqueName: \"kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v\") pod \"auto-csr-approver-29561384-sz9d4\" (UID: \"161669e9-56d1-422a-88de-59a3f8e60b0a\") " pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.392487 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdg5v\" (UniqueName: \"kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v\") pod \"auto-csr-approver-29561384-sz9d4\" (UID: \"161669e9-56d1-422a-88de-59a3f8e60b0a\") " pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.430864 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdg5v\" (UniqueName: 
\"kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v\") pod \"auto-csr-approver-29561384-sz9d4\" (UID: \"161669e9-56d1-422a-88de-59a3f8e60b0a\") " pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:00 crc kubenswrapper[4736]: I0316 17:44:00.555570 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:01 crc kubenswrapper[4736]: I0316 17:44:01.112523 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561384-sz9d4"] Mar 16 17:44:01 crc kubenswrapper[4736]: I0316 17:44:01.580313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" event={"ID":"161669e9-56d1-422a-88de-59a3f8e60b0a","Type":"ContainerStarted","Data":"a6ffcd21cd22700f4a7a9ecb87e636b66b9db071a92eb6a6457c33ffaee72b52"} Mar 16 17:44:03 crc kubenswrapper[4736]: I0316 17:44:03.610910 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" event={"ID":"161669e9-56d1-422a-88de-59a3f8e60b0a","Type":"ContainerStarted","Data":"10d425e35978182add2f5877db821b3b7d30ee29d5603f1a67489bebff0fd589"} Mar 16 17:44:03 crc kubenswrapper[4736]: I0316 17:44:03.645320 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" podStartSLOduration=2.382857636 podStartE2EDuration="3.645300344s" podCreationTimestamp="2026-03-16 17:44:00 +0000 UTC" firstStartedPulling="2026-03-16 17:44:01.133484615 +0000 UTC m=+9042.860874922" lastFinishedPulling="2026-03-16 17:44:02.395927313 +0000 UTC m=+9044.123317630" observedRunningTime="2026-03-16 17:44:03.63015306 +0000 UTC m=+9045.357543397" watchObservedRunningTime="2026-03-16 17:44:03.645300344 +0000 UTC m=+9045.372690641" Mar 16 17:44:04 crc kubenswrapper[4736]: I0316 17:44:04.626808 4736 generic.go:334] "Generic (PLEG): container finished" podID="161669e9-56d1-422a-88de-59a3f8e60b0a" containerID="10d425e35978182add2f5877db821b3b7d30ee29d5603f1a67489bebff0fd589" exitCode=0 Mar 16 17:44:04 crc kubenswrapper[4736]: I0316 17:44:04.626871 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" event={"ID":"161669e9-56d1-422a-88de-59a3f8e60b0a","Type":"ContainerDied","Data":"10d425e35978182add2f5877db821b3b7d30ee29d5603f1a67489bebff0fd589"} Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.093478 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.187790 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdg5v\" (UniqueName: \"kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v\") pod \"161669e9-56d1-422a-88de-59a3f8e60b0a\" (UID: \"161669e9-56d1-422a-88de-59a3f8e60b0a\") " Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.200416 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v" (OuterVolumeSpecName: "kube-api-access-sdg5v") pod "161669e9-56d1-422a-88de-59a3f8e60b0a" (UID: "161669e9-56d1-422a-88de-59a3f8e60b0a"). InnerVolumeSpecName "kube-api-access-sdg5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.290603 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdg5v\" (UniqueName: \"kubernetes.io/projected/161669e9-56d1-422a-88de-59a3f8e60b0a-kube-api-access-sdg5v\") on node \"crc\" DevicePath \"\"" Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.647800 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" event={"ID":"161669e9-56d1-422a-88de-59a3f8e60b0a","Type":"ContainerDied","Data":"a6ffcd21cd22700f4a7a9ecb87e636b66b9db071a92eb6a6457c33ffaee72b52"} Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.647848 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ffcd21cd22700f4a7a9ecb87e636b66b9db071a92eb6a6457c33ffaee72b52" Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.647849 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561384-sz9d4" Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.734375 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561378-gp8cb"] Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.744439 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561378-gp8cb"] Mar 16 17:44:06 crc kubenswrapper[4736]: I0316 17:44:06.991702 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59f235d-e272-4ebb-8573-e1473496456d" path="/var/lib/kubelet/pods/e59f235d-e272-4ebb-8573-e1473496456d/volumes" Mar 16 17:44:40 crc kubenswrapper[4736]: E0316 17:44:40.662205 4736 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:57728->38.102.83.30:38289: write tcp 38.102.83.30:57728->38.102.83.30:38289: write: broken pipe Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.149849 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk"] Mar 16 17:45:00 crc kubenswrapper[4736]: E0316 17:45:00.150743 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161669e9-56d1-422a-88de-59a3f8e60b0a" containerName="oc" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.150757 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="161669e9-56d1-422a-88de-59a3f8e60b0a" containerName="oc" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.150971 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="161669e9-56d1-422a-88de-59a3f8e60b0a" containerName="oc" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.151556 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.155329 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.155470 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.166487 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk"] Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.237164 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txwt6\" (UniqueName: \"kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.237561 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.237606 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.340825 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txwt6\" (UniqueName: \"kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.340986 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.341030 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.342599 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume\") pod 
\"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.348344 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.361617 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txwt6\" (UniqueName: \"kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6\") pod \"collect-profiles-29561385-ljmfk\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:00 crc kubenswrapper[4736]: I0316 17:45:00.487624 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:01 crc kubenswrapper[4736]: I0316 17:45:01.082963 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk"] Mar 16 17:45:01 crc kubenswrapper[4736]: I0316 17:45:01.321478 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" event={"ID":"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d","Type":"ContainerStarted","Data":"3c62dbb0a450cd138ae1f2a14f82937edabcc97a8105d149f4566b4f38b35978"} Mar 16 17:45:01 crc kubenswrapper[4736]: I0316 17:45:01.321834 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" event={"ID":"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d","Type":"ContainerStarted","Data":"405b802dba4b993faa59b6f82acd3f64e01901ff96c72cdcad04e991f0bf7e90"} Mar 16 17:45:01 crc kubenswrapper[4736]: I0316 17:45:01.342458 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" podStartSLOduration=1.342441626 podStartE2EDuration="1.342441626s" podCreationTimestamp="2026-03-16 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 17:45:01.334412687 +0000 UTC m=+9103.061802974" watchObservedRunningTime="2026-03-16 17:45:01.342441626 +0000 UTC m=+9103.069831913" Mar 16 17:45:02 crc kubenswrapper[4736]: I0316 17:45:02.331532 4736 generic.go:334] "Generic (PLEG): container finished" podID="bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" containerID="3c62dbb0a450cd138ae1f2a14f82937edabcc97a8105d149f4566b4f38b35978" exitCode=0 Mar 16 17:45:02 crc kubenswrapper[4736]: I0316 17:45:02.331570 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" event={"ID":"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d","Type":"ContainerDied","Data":"3c62dbb0a450cd138ae1f2a14f82937edabcc97a8105d149f4566b4f38b35978"} Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.666858 4736 scope.go:117] "RemoveContainer" containerID="067f4ef6415c278d2484b6a90c1ef1c5f842e2626b3f85778242d961d88aac6d" Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.834321 4736 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.920660 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txwt6\" (UniqueName: \"kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6\") pod \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.920747 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume\") pod \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.920942 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume\") pod \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\" (UID: \"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d\") " Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.921795 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" (UID: "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.928736 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6" (OuterVolumeSpecName: "kube-api-access-txwt6") pod "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" (UID: "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d"). InnerVolumeSpecName "kube-api-access-txwt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:45:03 crc kubenswrapper[4736]: I0316 17:45:03.931867 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" (UID: "bf4cf0c2-3aa0-4f87-86f9-772752b42d5d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.023647 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txwt6\" (UniqueName: \"kubernetes.io/projected/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-kube-api-access-txwt6\") on node \"crc\" DevicePath \"\"" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.023683 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.023694 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.349008 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" event={"ID":"bf4cf0c2-3aa0-4f87-86f9-772752b42d5d","Type":"ContainerDied","Data":"405b802dba4b993faa59b6f82acd3f64e01901ff96c72cdcad04e991f0bf7e90"} Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.349047 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405b802dba4b993faa59b6f82acd3f64e01901ff96c72cdcad04e991f0bf7e90" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.349061 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk" Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.425242 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk"] Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.434137 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561340-9xvwk"] Mar 16 17:45:04 crc kubenswrapper[4736]: I0316 17:45:04.993583 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73a6883c-3afc-4930-91a7-510201651ed9" path="/var/lib/kubelet/pods/73a6883c-3afc-4930-91a7-510201651ed9/volumes" Mar 16 17:45:38 crc kubenswrapper[4736]: I0316 17:45:38.507707 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:45:38 crc kubenswrapper[4736]: I0316 17:45:38.508559 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.148257 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561386-2dd7q"] Mar 16 17:46:00 crc kubenswrapper[4736]: E0316 17:46:00.149173 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" containerName="collect-profiles" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.149184 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" containerName="collect-profiles" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.149357 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" containerName="collect-profiles" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.149999 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.182018 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561386-2dd7q"] Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.186168 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.186441 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.186581 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.288950 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kf7z\" (UniqueName: \"kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z\") pod \"auto-csr-approver-29561386-2dd7q\" (UID: \"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8\") " pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.390236 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kf7z\" (UniqueName: \"kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z\") pod \"auto-csr-approver-29561386-2dd7q\" (UID: \"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8\") " pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.418373 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kf7z\" (UniqueName: \"kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z\") pod \"auto-csr-approver-29561386-2dd7q\" (UID: \"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8\") " pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:00 crc kubenswrapper[4736]: I0316 17:46:00.499537 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:01 crc kubenswrapper[4736]: I0316 17:46:01.021004 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561386-2dd7q"] Mar 16 17:46:01 crc kubenswrapper[4736]: I0316 17:46:01.267339 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" event={"ID":"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8","Type":"ContainerStarted","Data":"81ce9ef8a9bee28dc500f4827cfe82bfc6361daafd4213a33cb6b4fb3364d4d3"} Mar 16 17:46:03 crc kubenswrapper[4736]: I0316 17:46:03.290168 4736 generic.go:334] "Generic (PLEG): container finished" podID="c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" containerID="ddbe79d31002430a6d12c8e075debb35962424c41650b1381f195d6297b58c8b" exitCode=0 Mar 16 17:46:03 crc kubenswrapper[4736]: I0316 17:46:03.290283 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" event={"ID":"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8","Type":"ContainerDied","Data":"ddbe79d31002430a6d12c8e075debb35962424c41650b1381f195d6297b58c8b"} Mar 16 17:46:03 crc kubenswrapper[4736]: I0316 17:46:03.754325 4736 scope.go:117] "RemoveContainer" containerID="c4c991cc8b8e0001dd0ef00be6d1f19243a09c6a7b317782a3f91a88f5a1a26e" Mar 16 17:46:04 crc kubenswrapper[4736]: I0316 17:46:04.665559 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:04 crc kubenswrapper[4736]: I0316 17:46:04.816302 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kf7z\" (UniqueName: \"kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z\") pod \"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8\" (UID: \"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8\") " Mar 16 17:46:04 crc kubenswrapper[4736]: I0316 17:46:04.824495 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z" (OuterVolumeSpecName: "kube-api-access-7kf7z") pod "c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" (UID: "c3f4a478-1082-40bd-9e7e-486e9a6d5ad8"). InnerVolumeSpecName "kube-api-access-7kf7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:46:04 crc kubenswrapper[4736]: I0316 17:46:04.919769 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kf7z\" (UniqueName: \"kubernetes.io/projected/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8-kube-api-access-7kf7z\") on node \"crc\" DevicePath \"\"" Mar 16 17:46:05 crc kubenswrapper[4736]: I0316 17:46:05.326319 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" event={"ID":"c3f4a478-1082-40bd-9e7e-486e9a6d5ad8","Type":"ContainerDied","Data":"81ce9ef8a9bee28dc500f4827cfe82bfc6361daafd4213a33cb6b4fb3364d4d3"} Mar 16 17:46:05 crc kubenswrapper[4736]: I0316 17:46:05.326370 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ce9ef8a9bee28dc500f4827cfe82bfc6361daafd4213a33cb6b4fb3364d4d3" Mar 16 17:46:05 crc kubenswrapper[4736]: I0316 17:46:05.326369 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561386-2dd7q" Mar 16 17:46:05 crc kubenswrapper[4736]: I0316 17:46:05.781372 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561380-276jd"] Mar 16 17:46:05 crc kubenswrapper[4736]: I0316 17:46:05.789996 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561380-276jd"] Mar 16 17:46:06 crc kubenswrapper[4736]: I0316 17:46:06.993338 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a" path="/var/lib/kubelet/pods/afc1ff7d-16c5-46e7-85e9-19f5cba5fd0a/volumes" Mar 16 17:46:08 crc kubenswrapper[4736]: I0316 17:46:08.507696 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:46:08 crc kubenswrapper[4736]: I0316 17:46:08.508217 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.508042 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.508952 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.509064 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.511517 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.511926 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" gracePeriod=600 Mar 16 17:46:38 crc kubenswrapper[4736]: E0316 17:46:38.650220 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.747546 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" exitCode=0 Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.747588 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43"} Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.747623 4736 scope.go:117] "RemoveContainer" containerID="fc7b974ed672b1589601d1bec52901fe7ac75bc77e34079673e3d90c9d3b20d4" Mar 16 17:46:38 crc kubenswrapper[4736]: I0316 17:46:38.748382 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:46:38 crc kubenswrapper[4736]: E0316 17:46:38.748707 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:46:51 crc kubenswrapper[4736]: I0316 17:46:51.978512 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:46:51 crc kubenswrapper[4736]: E0316 17:46:51.979885 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:47:02 crc kubenswrapper[4736]: I0316 17:47:02.982524 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:47:02 crc kubenswrapper[4736]: E0316 17:47:02.983634 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:47:03 crc kubenswrapper[4736]: I0316 17:47:03.837827 4736 scope.go:117] "RemoveContainer" containerID="dd2b7b54d992e8628119b3b0c3b4512ba78433bb0adf3be9968b9c0a49e00b4c" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.223701 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:05 crc kubenswrapper[4736]: E0316 17:47:05.224447 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" containerName="oc" Mar 16 
17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.224479 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" containerName="oc" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.224748 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" containerName="oc" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.226380 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.236875 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.360581 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5nfj\" (UniqueName: \"kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.360724 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.360763 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.462078 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.462160 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.462228 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5nfj\" (UniqueName: \"kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.462596 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " 
pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.462707 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.480689 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5nfj\" (UniqueName: \"kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj\") pod \"community-operators-7mlz5\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.552193 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.829214 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.831498 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.842241 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.976517 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.976600 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrss\" (UniqueName: \"kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:05 crc kubenswrapper[4736]: I0316 17:47:05.977120 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.078659 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.078759 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " 
pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.078823 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncrss\" (UniqueName: \"kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.081019 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.082135 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.091047 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.116338 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncrss\" (UniqueName: \"kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss\") pod \"certified-operators-9kgtz\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.162632 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:06 crc kubenswrapper[4736]: I0316 17:47:06.626531 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:06 crc kubenswrapper[4736]: W0316 17:47:06.632622 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5285679_c9a8_4e29_bdf8_694d4d9fa4b1.slice/crio-1bbcf74e2c0e83e11de126b87a88ba7e6baa5cc3bf71e706b0692c9c33b7edcf WatchSource:0}: Error finding container 1bbcf74e2c0e83e11de126b87a88ba7e6baa5cc3bf71e706b0692c9c33b7edcf: Status 404 returned error can't find the container with id 1bbcf74e2c0e83e11de126b87a88ba7e6baa5cc3bf71e706b0692c9c33b7edcf Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.078280 4736 generic.go:334] "Generic (PLEG): container finished" podID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerID="6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610" exitCode=0 Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.078378 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerDied","Data":"6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610"} Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.078789 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerStarted","Data":"1bbcf74e2c0e83e11de126b87a88ba7e6baa5cc3bf71e706b0692c9c33b7edcf"} Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.080306 4736 generic.go:334] "Generic (PLEG): container finished" podID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerID="f90626a5064dd68bbaa64c802d8b58c228dd0996f9d1cae7210464891cce9d21" exitCode=0 Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.080351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerDied","Data":"f90626a5064dd68bbaa64c802d8b58c228dd0996f9d1cae7210464891cce9d21"} Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.080369 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerStarted","Data":"7336cfd192d1bf6b4ade38c682ed2bd056bcd2616a23199b64065d09f1222245"} Mar 16 17:47:07 crc kubenswrapper[4736]: I0316 17:47:07.080435 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:47:08 crc kubenswrapper[4736]: I0316 17:47:08.089461 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerStarted","Data":"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02"} Mar 16 17:47:10 crc kubenswrapper[4736]: I0316 17:47:10.114019 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerStarted","Data":"1ad110536af9d64f78a138c19ef2957b4f5982665c805186cd7c8332b8cc12c3"} Mar 16 17:47:10 crc kubenswrapper[4736]: I0316 17:47:10.116970 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerID="aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02" exitCode=0 Mar 16 17:47:10 crc kubenswrapper[4736]: I0316 17:47:10.117015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerDied","Data":"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02"} Mar 16 17:47:11 crc kubenswrapper[4736]: I0316 17:47:11.131339 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerStarted","Data":"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af"} Mar 16 17:47:11 crc kubenswrapper[4736]: I0316 17:47:11.166071 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9kgtz" podStartSLOduration=2.653849018 podStartE2EDuration="6.16604665s" podCreationTimestamp="2026-03-16 17:47:05 +0000 UTC" firstStartedPulling="2026-03-16 17:47:07.08019881 +0000 UTC m=+9228.807589097" lastFinishedPulling="2026-03-16 17:47:10.592396442 +0000 UTC m=+9232.319786729" observedRunningTime="2026-03-16 17:47:11.156047458 +0000 UTC m=+9232.883437785" watchObservedRunningTime="2026-03-16 17:47:11.16604665 +0000 UTC m=+9232.893436957" Mar 16 17:47:12 crc kubenswrapper[4736]: I0316 17:47:12.140140 4736 generic.go:334] "Generic (PLEG): container finished" podID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerID="1ad110536af9d64f78a138c19ef2957b4f5982665c805186cd7c8332b8cc12c3" exitCode=0 Mar 16 17:47:12 crc kubenswrapper[4736]: I0316 17:47:12.140183 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerDied","Data":"1ad110536af9d64f78a138c19ef2957b4f5982665c805186cd7c8332b8cc12c3"} Mar 16 17:47:13 crc kubenswrapper[4736]: I0316 17:47:13.163515 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerStarted","Data":"1d33d99f98f7632b51eb294887cab13e2da55f05ef0e7734c18f7d6b3434acba"} Mar 16 17:47:13 crc kubenswrapper[4736]: I0316 17:47:13.211268 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7mlz5" podStartSLOduration=2.7692522950000003 podStartE2EDuration="8.211241152s" podCreationTimestamp="2026-03-16 17:47:05 +0000 UTC" firstStartedPulling="2026-03-16 17:47:07.082557863 +0000 UTC m=+9228.809948150" lastFinishedPulling="2026-03-16 17:47:12.52454672 +0000 UTC m=+9234.251937007" observedRunningTime="2026-03-16 17:47:13.183388282 +0000 UTC m=+9234.910778569" watchObservedRunningTime="2026-03-16 17:47:13.211241152 +0000 UTC m=+9234.938631449" Mar 16 17:47:15 crc kubenswrapper[4736]: I0316 17:47:15.553714 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:15 crc kubenswrapper[4736]: I0316 17:47:15.553790 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:17 crc kubenswrapper[4736]: I0316 17:47:17.030081 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:17 crc 
kubenswrapper[4736]: I0316 17:47:17.057624 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7mlz5" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="registry-server" probeResult="failure" output=< Mar 16 17:47:17 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:47:17 crc kubenswrapper[4736]: > Mar 16 17:47:17 crc kubenswrapper[4736]: I0316 17:47:17.069917 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:47:17 crc kubenswrapper[4736]: E0316 17:47:17.070348 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:47:17 crc kubenswrapper[4736]: I0316 17:47:17.099967 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:18 crc kubenswrapper[4736]: I0316 17:47:18.130611 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9kgtz" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="registry-server" probeResult="failure" output=< Mar 16 17:47:18 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:47:18 crc kubenswrapper[4736]: > Mar 16 17:47:25 crc kubenswrapper[4736]: I0316 17:47:25.621166 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:25 crc kubenswrapper[4736]: I0316 17:47:25.699155 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:25 crc kubenswrapper[4736]: I0316 17:47:25.981508 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:26 crc kubenswrapper[4736]: I0316 17:47:26.245319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:26 crc kubenswrapper[4736]: I0316 17:47:26.305013 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:27 crc kubenswrapper[4736]: I0316 17:47:27.186372 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7mlz5" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="registry-server" containerID="cri-o://1d33d99f98f7632b51eb294887cab13e2da55f05ef0e7734c18f7d6b3434acba" gracePeriod=2 Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.197356 4736 generic.go:334] "Generic (PLEG): container finished" podID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerID="1d33d99f98f7632b51eb294887cab13e2da55f05ef0e7734c18f7d6b3434acba" exitCode=0 Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.197404 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" 
event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerDied","Data":"1d33d99f98f7632b51eb294887cab13e2da55f05ef0e7734c18f7d6b3434acba"} Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.197915 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7mlz5" event={"ID":"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4","Type":"ContainerDied","Data":"7336cfd192d1bf6b4ade38c682ed2bd056bcd2616a23199b64065d09f1222245"} Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.197932 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7336cfd192d1bf6b4ade38c682ed2bd056bcd2616a23199b64065d09f1222245" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.206924 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.286142 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities\") pod \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.286275 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5nfj\" (UniqueName: \"kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj\") pod \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.286423 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content\") pod \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\" (UID: \"2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4\") " Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.287195 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities" (OuterVolumeSpecName: "utilities") pod "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" (UID: "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.305082 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj" (OuterVolumeSpecName: "kube-api-access-t5nfj") pod "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" (UID: "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4"). InnerVolumeSpecName "kube-api-access-t5nfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.337425 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" (UID: "2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.381360 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.381703 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9kgtz" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="registry-server" containerID="cri-o://884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af" gracePeriod=2 Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.388531 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.388575 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.388598 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5nfj\" (UniqueName: \"kubernetes.io/projected/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4-kube-api-access-t5nfj\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:28 crc kubenswrapper[4736]: I0316 17:47:28.995867 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.101359 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content\") pod \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.101604 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities\") pod \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.101751 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncrss\" (UniqueName: \"kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss\") pod \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\" (UID: \"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1\") " Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.104158 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities" (OuterVolumeSpecName: "utilities") pod "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" (UID: "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.108433 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss" (OuterVolumeSpecName: "kube-api-access-ncrss") pod "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" (UID: "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1"). InnerVolumeSpecName "kube-api-access-ncrss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.168909 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" (UID: "f5285679-c9a8-4e29-bdf8-694d4d9fa4b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.203941 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.203971 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.203986 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncrss\" (UniqueName: \"kubernetes.io/projected/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1-kube-api-access-ncrss\") on node \"crc\" DevicePath \"\"" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210732 4736 generic.go:334] "Generic (PLEG): container finished" podID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerID="884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af" exitCode=0 Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210793 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9kgtz" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210827 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7mlz5" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210830 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerDied","Data":"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af"} Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9kgtz" event={"ID":"f5285679-c9a8-4e29-bdf8-694d4d9fa4b1","Type":"ContainerDied","Data":"1bbcf74e2c0e83e11de126b87a88ba7e6baa5cc3bf71e706b0692c9c33b7edcf"} Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.210906 4736 scope.go:117] "RemoveContainer" containerID="884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.243237 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.260004 4736 scope.go:117] "RemoveContainer" containerID="aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.260144 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7mlz5"] Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.270169 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.279574 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9kgtz"] Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.289668 4736 scope.go:117] "RemoveContainer" containerID="6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.311495 4736 scope.go:117] "RemoveContainer" containerID="884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af" Mar 16 17:47:29 crc kubenswrapper[4736]: E0316 17:47:29.315329 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af\": container with ID starting with 884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af not found: ID does not exist" containerID="884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.315376 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af"} err="failed to get container status \"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af\": rpc error: code = NotFound desc = could not find container \"884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af\": container with ID starting with 884c9bc351557274e2bbb07f500390de3edcca8f7852369ce2bad70426c5b7af not found: ID does not exist" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.315403 4736 scope.go:117] "RemoveContainer" containerID="aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02" Mar 16 17:47:29 crc kubenswrapper[4736]: E0316 17:47:29.315812 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02\": container with ID starting with aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02 not found: ID does not exist" containerID="aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.315835 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02"} err="failed to get container status \"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02\": rpc error: code = NotFound desc = could not find container \"aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02\": container with ID starting with aa68d3ca06c5fea9587b1bf95f2047cf77468b97bb0198423c67bee7b223ec02 not found: ID does not exist" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.315851 4736 scope.go:117] "RemoveContainer" containerID="6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610" Mar 16 17:47:29 crc kubenswrapper[4736]: E0316 17:47:29.316036 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610\": container with ID starting with 6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610 not found: ID does not exist" containerID="6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610" Mar 16 17:47:29 crc kubenswrapper[4736]: I0316 17:47:29.316054 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610"} err="failed to get container status \"6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610\": rpc error: code = NotFound desc = could not find container \"6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610\": container with ID starting with 6a819583fd345c563e39190fab159c0da9ef8330480ebbd01c6fb34410c1b610 not found: ID does not exist" Mar 16 17:47:30 crc kubenswrapper[4736]: I0316 17:47:30.996413 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" path="/var/lib/kubelet/pods/2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4/volumes" Mar 16 17:47:31 crc kubenswrapper[4736]: I0316 17:47:31.000135 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" path="/var/lib/kubelet/pods/f5285679-c9a8-4e29-bdf8-694d4d9fa4b1/volumes" Mar 16 17:47:31 crc kubenswrapper[4736]: I0316 17:47:31.978643 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:47:31 crc kubenswrapper[4736]: E0316 17:47:31.979160 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:47:46 crc kubenswrapper[4736]: I0316 17:47:46.980482 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:47:46 crc kubenswrapper[4736]: E0316 17:47:46.981495 4736 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.185732 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561388-wz6c2"] Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188010 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="extract-content" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.188136 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="extract-content" Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188244 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.188326 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188441 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="extract-content" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.188521 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="extract-content" Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188638 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="extract-utilities" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.188724 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="extract-utilities" Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188814 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="extract-utilities" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.188891 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="extract-utilities" Mar 16 17:48:00 crc kubenswrapper[4736]: E0316 17:48:00.188967 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.189038 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.189339 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b38de6e-d4cc-4ba6-b3e7-1408ca3a47f4" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.189440 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5285679-c9a8-4e29-bdf8-694d4d9fa4b1" containerName="registry-server" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.190265 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.196067 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.196405 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.198937 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.210306 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561388-wz6c2"] Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.359773 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc6tm\" (UniqueName: \"kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm\") pod \"auto-csr-approver-29561388-wz6c2\" (UID: \"3afd2be9-62fc-431e-9d0c-846909035ea6\") " pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.492559 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc6tm\" (UniqueName: \"kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm\") pod \"auto-csr-approver-29561388-wz6c2\" (UID: \"3afd2be9-62fc-431e-9d0c-846909035ea6\") " pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.527446 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc6tm\" (UniqueName: \"kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm\") pod \"auto-csr-approver-29561388-wz6c2\" (UID: \"3afd2be9-62fc-431e-9d0c-846909035ea6\") " pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:00 crc kubenswrapper[4736]: I0316 17:48:00.818776 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:01 crc kubenswrapper[4736]: I0316 17:48:01.351036 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561388-wz6c2"] Mar 16 17:48:01 crc kubenswrapper[4736]: I0316 17:48:01.590185 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" event={"ID":"3afd2be9-62fc-431e-9d0c-846909035ea6","Type":"ContainerStarted","Data":"5498591dbcab648f84b27415b443dafdf792fd7ee701a0e3a17eb601a6032ad7"} Mar 16 17:48:01 crc kubenswrapper[4736]: I0316 17:48:01.978431 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:48:01 crc kubenswrapper[4736]: E0316 17:48:01.978910 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:48:03 crc kubenswrapper[4736]: I0316 17:48:03.620813 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" event={"ID":"3afd2be9-62fc-431e-9d0c-846909035ea6","Type":"ContainerStarted","Data":"a9ec82965dc19a8b724c48ed7a8dac03baa3d023346e32652dbcf978dcb7dc92"} Mar 16 17:48:03 crc kubenswrapper[4736]: I0316 17:48:03.635979 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" podStartSLOduration=2.333603151 podStartE2EDuration="3.635962838s" podCreationTimestamp="2026-03-16 17:48:00 +0000 UTC" firstStartedPulling="2026-03-16 17:48:01.367149542 +0000 UTC m=+9283.094539829" lastFinishedPulling="2026-03-16 17:48:02.669509219 +0000 UTC m=+9284.396899516" observedRunningTime="2026-03-16 17:48:03.63346979 +0000 UTC m=+9285.360860087" watchObservedRunningTime="2026-03-16 17:48:03.635962838 +0000 UTC m=+9285.363353115" Mar 16 17:48:04 crc kubenswrapper[4736]: I0316 17:48:04.638792 4736 generic.go:334] "Generic (PLEG): container finished" podID="3afd2be9-62fc-431e-9d0c-846909035ea6" containerID="a9ec82965dc19a8b724c48ed7a8dac03baa3d023346e32652dbcf978dcb7dc92" exitCode=0 Mar 16 17:48:04 crc kubenswrapper[4736]: I0316 17:48:04.638923 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" event={"ID":"3afd2be9-62fc-431e-9d0c-846909035ea6","Type":"ContainerDied","Data":"a9ec82965dc19a8b724c48ed7a8dac03baa3d023346e32652dbcf978dcb7dc92"} Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.067465 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.254806 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc6tm\" (UniqueName: \"kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm\") pod \"3afd2be9-62fc-431e-9d0c-846909035ea6\" (UID: \"3afd2be9-62fc-431e-9d0c-846909035ea6\") " Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.264395 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm" (OuterVolumeSpecName: "kube-api-access-cc6tm") pod "3afd2be9-62fc-431e-9d0c-846909035ea6" (UID: "3afd2be9-62fc-431e-9d0c-846909035ea6"). InnerVolumeSpecName "kube-api-access-cc6tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.359477 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc6tm\" (UniqueName: \"kubernetes.io/projected/3afd2be9-62fc-431e-9d0c-846909035ea6-kube-api-access-cc6tm\") on node \"crc\" DevicePath \"\"" Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.674335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" event={"ID":"3afd2be9-62fc-431e-9d0c-846909035ea6","Type":"ContainerDied","Data":"5498591dbcab648f84b27415b443dafdf792fd7ee701a0e3a17eb601a6032ad7"} Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.674412 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5498591dbcab648f84b27415b443dafdf792fd7ee701a0e3a17eb601a6032ad7" Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.674531 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561388-wz6c2" Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.730708 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561382-khbfs"] Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.738998 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561382-khbfs"] Mar 16 17:48:06 crc kubenswrapper[4736]: I0316 17:48:06.989099 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91675444-4ed8-4f0f-94bb-d358385a3dbc" path="/var/lib/kubelet/pods/91675444-4ed8-4f0f-94bb-d358385a3dbc/volumes" Mar 16 17:48:14 crc kubenswrapper[4736]: I0316 17:48:14.979128 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:48:14 crc kubenswrapper[4736]: E0316 17:48:14.980032 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:48:26 crc kubenswrapper[4736]: I0316 17:48:26.978650 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:48:26 crc kubenswrapper[4736]: E0316 17:48:26.979538 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:48:39 crc kubenswrapper[4736]: I0316 17:48:39.978281 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:48:39 crc kubenswrapper[4736]: E0316 17:48:39.978921 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:48:54 crc kubenswrapper[4736]: I0316 17:48:54.977698 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:48:54 crc kubenswrapper[4736]: E0316 17:48:54.978434 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:49:03 crc kubenswrapper[4736]: I0316 17:49:03.986163 4736 scope.go:117] "RemoveContainer" containerID="ac34104a0b268680131dec74e6c0af2aa7e44066cf55f65927bc89086e1a207b" Mar 16 
17:49:04 crc kubenswrapper[4736]: I0316 17:49:04.013762 4736 scope.go:117] "RemoveContainer" containerID="68ffe61205dbd3e78ef2e2c9dc655eb9329fcb107d35e4f7df26a8a6d76ede81"
Mar 16 17:49:04 crc kubenswrapper[4736]: I0316 17:49:04.049560 4736 scope.go:117] "RemoveContainer" containerID="30ebc6af604f01a832a33dd64bef5c9a093fd49fca1cf2fc086e25ee2cef83c1"
Mar 16 17:49:04 crc kubenswrapper[4736]: I0316 17:49:04.121477 4736 scope.go:117] "RemoveContainer" containerID="13e579fee27d9b18181982f4e4cd420a72cf379668151f70a904d41a51d25aaf"
Mar 16 17:49:08 crc kubenswrapper[4736]: I0316 17:49:08.986667 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43"
Mar 16 17:49:08 crc kubenswrapper[4736]: E0316 17:49:08.987698 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.681330 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"]
Mar 16 17:49:21 crc kubenswrapper[4736]: E0316 17:49:21.682414 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afd2be9-62fc-431e-9d0c-846909035ea6" containerName="oc"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.682431 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afd2be9-62fc-431e-9d0c-846909035ea6" containerName="oc"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.682692 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afd2be9-62fc-431e-9d0c-846909035ea6" containerName="oc"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.687144 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.729525 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"]
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.784619 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.785151 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbqg\" (UniqueName: \"kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.785269 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.886675 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbqg\" (UniqueName: \"kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.887063 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.887552 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.887711 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.887964 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk"
Mar 16 17:49:21 crc kubenswrapper[4736]: I0316 17:49:21.911403 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6zbqg\" (UniqueName: \"kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg\") pod \"redhat-marketplace-rm8dk\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:22 crc kubenswrapper[4736]: I0316 17:49:22.018705 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:22 crc kubenswrapper[4736]: I0316 17:49:22.531442 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"] Mar 16 17:49:22 crc kubenswrapper[4736]: I0316 17:49:22.977764 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:49:22 crc kubenswrapper[4736]: E0316 17:49:22.978615 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:49:23 crc kubenswrapper[4736]: I0316 17:49:23.462760 4736 generic.go:334] "Generic (PLEG): container finished" podID="86550230-8471-47b7-9f9d-042adba7fda5" containerID="4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b" exitCode=0 Mar 16 17:49:23 crc kubenswrapper[4736]: I0316 17:49:23.464281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerDied","Data":"4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b"} Mar 16 17:49:23 crc kubenswrapper[4736]: I0316 17:49:23.464325 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerStarted","Data":"80ce09ae8259e0c0736ed21468e6e2afbe2bc0f6566254448c05a25345af2d4f"} Mar 16 17:49:24 crc kubenswrapper[4736]: I0316 17:49:24.474639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerStarted","Data":"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0"} Mar 16 17:49:25 crc kubenswrapper[4736]: E0316 17:49:25.491325 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86550230_8471_47b7_9f9d_042adba7fda5.slice/crio-bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86550230_8471_47b7_9f9d_042adba7fda5.slice/crio-conmon-bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0.scope\": RecentStats: unable to find data in memory cache]" Mar 16 17:49:25 crc kubenswrapper[4736]: I0316 17:49:25.493429 4736 generic.go:334] "Generic (PLEG): container finished" podID="86550230-8471-47b7-9f9d-042adba7fda5" containerID="bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0" exitCode=0 Mar 16 17:49:25 crc kubenswrapper[4736]: I0316 17:49:25.493479 4736 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerDied","Data":"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0"} Mar 16 17:49:26 crc kubenswrapper[4736]: I0316 17:49:26.504694 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerStarted","Data":"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8"} Mar 16 17:49:26 crc kubenswrapper[4736]: I0316 17:49:26.525081 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm8dk" podStartSLOduration=3.087577211 podStartE2EDuration="5.52503029s" podCreationTimestamp="2026-03-16 17:49:21 +0000 UTC" firstStartedPulling="2026-03-16 17:49:23.469355176 +0000 UTC m=+9365.196745483" lastFinishedPulling="2026-03-16 17:49:25.906808255 +0000 UTC m=+9367.634198562" observedRunningTime="2026-03-16 17:49:26.52396462 +0000 UTC m=+9368.251354927" watchObservedRunningTime="2026-03-16 17:49:26.52503029 +0000 UTC m=+9368.252420577" Mar 16 17:49:32 crc kubenswrapper[4736]: I0316 17:49:32.019828 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:32 crc kubenswrapper[4736]: I0316 17:49:32.020365 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:33 crc kubenswrapper[4736]: I0316 17:49:33.097649 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rm8dk" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="registry-server" probeResult="failure" output=< Mar 16 17:49:33 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:49:33 crc kubenswrapper[4736]: > Mar 16 17:49:35 crc kubenswrapper[4736]: I0316 17:49:35.979236 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:49:35 crc kubenswrapper[4736]: E0316 17:49:35.979752 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:49:42 crc kubenswrapper[4736]: I0316 17:49:42.073486 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:42 crc kubenswrapper[4736]: I0316 17:49:42.146070 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:42 crc kubenswrapper[4736]: I0316 17:49:42.320683 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"] Mar 16 17:49:43 crc kubenswrapper[4736]: I0316 17:49:43.654661 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm8dk" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="registry-server" containerID="cri-o://e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8" 
gracePeriod=2 Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.596161 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.656248 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zbqg\" (UniqueName: \"kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg\") pod \"86550230-8471-47b7-9f9d-042adba7fda5\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.656345 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") pod \"86550230-8471-47b7-9f9d-042adba7fda5\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.656398 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities\") pod \"86550230-8471-47b7-9f9d-042adba7fda5\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.658106 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities" (OuterVolumeSpecName: "utilities") pod "86550230-8471-47b7-9f9d-042adba7fda5" (UID: "86550230-8471-47b7-9f9d-042adba7fda5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.666348 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg" (OuterVolumeSpecName: "kube-api-access-6zbqg") pod "86550230-8471-47b7-9f9d-042adba7fda5" (UID: "86550230-8471-47b7-9f9d-042adba7fda5"). InnerVolumeSpecName "kube-api-access-6zbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.709474 4736 generic.go:334] "Generic (PLEG): container finished" podID="86550230-8471-47b7-9f9d-042adba7fda5" containerID="e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8" exitCode=0 Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.709516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerDied","Data":"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8"} Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.709544 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm8dk" event={"ID":"86550230-8471-47b7-9f9d-042adba7fda5","Type":"ContainerDied","Data":"80ce09ae8259e0c0736ed21468e6e2afbe2bc0f6566254448c05a25345af2d4f"} Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.709574 4736 scope.go:117] "RemoveContainer" containerID="e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.709768 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm8dk" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.784489 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86550230-8471-47b7-9f9d-042adba7fda5" (UID: "86550230-8471-47b7-9f9d-042adba7fda5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.843318 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") pod \"86550230-8471-47b7-9f9d-042adba7fda5\" (UID: \"86550230-8471-47b7-9f9d-042adba7fda5\") " Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.877963 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zbqg\" (UniqueName: \"kubernetes.io/projected/86550230-8471-47b7-9f9d-042adba7fda5-kube-api-access-6zbqg\") on node \"crc\" DevicePath \"\"" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.878001 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.878126 4736 scope.go:117] "RemoveContainer" containerID="bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0" Mar 16 17:49:44 crc kubenswrapper[4736]: W0316 17:49:44.882126 4736 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/86550230-8471-47b7-9f9d-042adba7fda5/volumes/kubernetes.io~empty-dir/catalog-content Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.887228 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86550230-8471-47b7-9f9d-042adba7fda5" (UID: "86550230-8471-47b7-9f9d-042adba7fda5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:49:44 crc kubenswrapper[4736]: I0316 17:49:44.979915 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86550230-8471-47b7-9f9d-042adba7fda5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.052927 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"] Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.054194 4736 scope.go:117] "RemoveContainer" containerID="4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.067974 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm8dk"] Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.075887 4736 scope.go:117] "RemoveContainer" containerID="e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8" Mar 16 17:49:45 crc kubenswrapper[4736]: E0316 17:49:45.077794 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8\": container with ID starting with e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8 not found: ID does not exist" containerID="e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.077826 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8"} err="failed to get container status \"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8\": rpc error: code = NotFound desc = could not find container \"e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8\": container with ID starting with e2d70ee2f4d2879049e86e0069702f1d21f1193de7b94efb677c929e7e2545c8 not found: ID does not exist" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.077852 4736 scope.go:117] "RemoveContainer" containerID="bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0" Mar 16 17:49:45 crc kubenswrapper[4736]: E0316 17:49:45.078457 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0\": container with ID starting with bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0 not found: ID does not exist" containerID="bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.078498 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0"} err="failed to get container status \"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0\": rpc error: code = NotFound desc = could not find container \"bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0\": container with ID starting with bbce47b57a99f577755234c075c6f4e520b39332c9a55179966b21c679a992c0 not found: ID does not exist" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.078521 4736 scope.go:117] "RemoveContainer" containerID="4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b" Mar 16 17:49:45 crc kubenswrapper[4736]: E0316 
17:49:45.084628 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b\": container with ID starting with 4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b not found: ID does not exist" containerID="4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b" Mar 16 17:49:45 crc kubenswrapper[4736]: I0316 17:49:45.084671 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b"} err="failed to get container status \"4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b\": rpc error: code = NotFound desc = could not find container \"4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b\": container with ID starting with 4f80c64a2110ab1a9f3244391b29a487320019f881ed5eac2e04699aef43e43b not found: ID does not exist" Mar 16 17:49:46 crc kubenswrapper[4736]: I0316 17:49:46.978091 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:49:46 crc kubenswrapper[4736]: E0316 17:49:46.978974 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:49:46 crc kubenswrapper[4736]: I0316 17:49:46.989897 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86550230-8471-47b7-9f9d-042adba7fda5" path="/var/lib/kubelet/pods/86550230-8471-47b7-9f9d-042adba7fda5/volumes" Mar 16 17:49:58 crc kubenswrapper[4736]: I0316 17:49:58.987484 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:49:58 crc kubenswrapper[4736]: E0316 17:49:58.988401 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.163491 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561390-w84mr"] Mar 16 17:50:00 crc kubenswrapper[4736]: E0316 17:50:00.163897 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="registry-server" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.163908 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="registry-server" Mar 16 17:50:00 crc kubenswrapper[4736]: E0316 17:50:00.163919 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="extract-content" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.163925 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="86550230-8471-47b7-9f9d-042adba7fda5" 
containerName="extract-content" Mar 16 17:50:00 crc kubenswrapper[4736]: E0316 17:50:00.163937 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="extract-utilities" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.163944 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="extract-utilities" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.164137 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="86550230-8471-47b7-9f9d-042adba7fda5" containerName="registry-server" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.167470 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.172464 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.172746 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.175475 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.180759 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561390-w84mr"] Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.303647 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4cx\" (UniqueName: \"kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx\") pod \"auto-csr-approver-29561390-w84mr\" (UID: \"859830c2-d94f-4fd8-a5ab-f59ade814dff\") " pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.405809 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k4cx\" (UniqueName: \"kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx\") pod \"auto-csr-approver-29561390-w84mr\" (UID: \"859830c2-d94f-4fd8-a5ab-f59ade814dff\") " pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.426326 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k4cx\" (UniqueName: \"kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx\") pod \"auto-csr-approver-29561390-w84mr\" (UID: \"859830c2-d94f-4fd8-a5ab-f59ade814dff\") " pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.487033 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:00 crc kubenswrapper[4736]: I0316 17:50:00.970687 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561390-w84mr"] Mar 16 17:50:01 crc kubenswrapper[4736]: I0316 17:50:01.881961 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561390-w84mr" event={"ID":"859830c2-d94f-4fd8-a5ab-f59ade814dff","Type":"ContainerStarted","Data":"63c2072d24f33859d03d471ef3b9f02d991e0d3bf63741cc09fd9986e36317e4"} Mar 16 17:50:03 crc kubenswrapper[4736]: I0316 17:50:03.916017 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561390-w84mr" event={"ID":"859830c2-d94f-4fd8-a5ab-f59ade814dff","Type":"ContainerStarted","Data":"95b976bee28298ff6e6f29c5e9c58f3739c10a4858d849e4c9694e32043d0587"} Mar 16 17:50:03 crc kubenswrapper[4736]: I0316 17:50:03.956753 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561390-w84mr" podStartSLOduration=2.557873225 podStartE2EDuration="3.956728367s" podCreationTimestamp="2026-03-16 17:50:00 +0000 UTC" firstStartedPulling="2026-03-16 17:50:00.986114465 +0000 UTC m=+9402.713504752" lastFinishedPulling="2026-03-16 17:50:02.384969607 +0000 UTC m=+9404.112359894" observedRunningTime="2026-03-16 17:50:03.941760268 +0000 UTC m=+9405.669150575" watchObservedRunningTime="2026-03-16 17:50:03.956728367 +0000 UTC m=+9405.684118674" Mar 16 17:50:04 crc kubenswrapper[4736]: I0316 17:50:04.924685 4736 generic.go:334] "Generic (PLEG): container finished" podID="859830c2-d94f-4fd8-a5ab-f59ade814dff" containerID="95b976bee28298ff6e6f29c5e9c58f3739c10a4858d849e4c9694e32043d0587" exitCode=0 Mar 16 17:50:04 crc kubenswrapper[4736]: I0316 17:50:04.924870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561390-w84mr" event={"ID":"859830c2-d94f-4fd8-a5ab-f59ade814dff","Type":"ContainerDied","Data":"95b976bee28298ff6e6f29c5e9c58f3739c10a4858d849e4c9694e32043d0587"} Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.444313 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.531555 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k4cx\" (UniqueName: \"kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx\") pod \"859830c2-d94f-4fd8-a5ab-f59ade814dff\" (UID: \"859830c2-d94f-4fd8-a5ab-f59ade814dff\") " Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.548812 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx" (OuterVolumeSpecName: "kube-api-access-7k4cx") pod "859830c2-d94f-4fd8-a5ab-f59ade814dff" (UID: "859830c2-d94f-4fd8-a5ab-f59ade814dff"). InnerVolumeSpecName "kube-api-access-7k4cx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.634703 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k4cx\" (UniqueName: \"kubernetes.io/projected/859830c2-d94f-4fd8-a5ab-f59ade814dff-kube-api-access-7k4cx\") on node \"crc\" DevicePath \"\"" Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.948406 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561390-w84mr" event={"ID":"859830c2-d94f-4fd8-a5ab-f59ade814dff","Type":"ContainerDied","Data":"63c2072d24f33859d03d471ef3b9f02d991e0d3bf63741cc09fd9986e36317e4"} Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.948701 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c2072d24f33859d03d471ef3b9f02d991e0d3bf63741cc09fd9986e36317e4" Mar 16 17:50:06 crc kubenswrapper[4736]: I0316 17:50:06.948491 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561390-w84mr" Mar 16 17:50:07 crc kubenswrapper[4736]: I0316 17:50:07.535088 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561384-sz9d4"] Mar 16 17:50:07 crc kubenswrapper[4736]: I0316 17:50:07.547018 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561384-sz9d4"] Mar 16 17:50:08 crc kubenswrapper[4736]: I0316 17:50:08.993124 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161669e9-56d1-422a-88de-59a3f8e60b0a" path="/var/lib/kubelet/pods/161669e9-56d1-422a-88de-59a3f8e60b0a/volumes" Mar 16 17:50:10 crc kubenswrapper[4736]: I0316 17:50:10.980092 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:50:10 crc kubenswrapper[4736]: E0316 17:50:10.980644 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:50:24 crc kubenswrapper[4736]: I0316 17:50:24.978250 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:50:24 crc kubenswrapper[4736]: E0316 17:50:24.978922 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:50:38 crc kubenswrapper[4736]: I0316 17:50:38.986506 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:50:38 crc kubenswrapper[4736]: E0316 17:50:38.987631 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:50:53 crc kubenswrapper[4736]: I0316 17:50:53.979456 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:50:53 crc kubenswrapper[4736]: E0316 17:50:53.980943 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:51:04 crc kubenswrapper[4736]: I0316 17:51:04.277827 4736 scope.go:117] "RemoveContainer" containerID="10d425e35978182add2f5877db821b3b7d30ee29d5603f1a67489bebff0fd589" Mar 16 17:51:08 crc kubenswrapper[4736]: I0316 17:51:08.989276 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:51:08 crc kubenswrapper[4736]: E0316 17:51:08.990725 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:51:21 crc kubenswrapper[4736]: I0316 17:51:21.978182 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:51:21 crc kubenswrapper[4736]: E0316 17:51:21.978969 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:51:36 crc kubenswrapper[4736]: I0316 17:51:36.978744 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:51:36 crc kubenswrapper[4736]: E0316 17:51:36.979703 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:51:51 crc kubenswrapper[4736]: I0316 17:51:51.978539 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:51:53 crc kubenswrapper[4736]: I0316 17:51:53.129506 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5"} Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.173890 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561392-76xfj"] Mar 16 17:52:00 crc kubenswrapper[4736]: E0316 17:52:00.175569 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859830c2-d94f-4fd8-a5ab-f59ade814dff" containerName="oc" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.175602 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="859830c2-d94f-4fd8-a5ab-f59ade814dff" containerName="oc" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.176161 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="859830c2-d94f-4fd8-a5ab-f59ade814dff" containerName="oc" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.177863 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.181009 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.181029 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.184063 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.201682 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561392-76xfj"] Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.294583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rghlp\" (UniqueName: \"kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp\") pod \"auto-csr-approver-29561392-76xfj\" (UID: \"c643b255-c909-4401-b9d3-4f1fa60aae98\") " pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.396816 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rghlp\" (UniqueName: \"kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp\") pod \"auto-csr-approver-29561392-76xfj\" (UID: \"c643b255-c909-4401-b9d3-4f1fa60aae98\") " pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.426664 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rghlp\" (UniqueName: \"kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp\") pod \"auto-csr-approver-29561392-76xfj\" (UID: \"c643b255-c909-4401-b9d3-4f1fa60aae98\") " pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:00 crc kubenswrapper[4736]: I0316 17:52:00.509843 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:01 crc kubenswrapper[4736]: I0316 17:52:01.076401 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561392-76xfj"] Mar 16 17:52:01 crc kubenswrapper[4736]: I0316 17:52:01.228884 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561392-76xfj" event={"ID":"c643b255-c909-4401-b9d3-4f1fa60aae98","Type":"ContainerStarted","Data":"2536223c533c741ead85b6912149396e3a0550848d141083567f4b451e3c3d65"} Mar 16 17:52:03 crc kubenswrapper[4736]: I0316 17:52:03.252871 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561392-76xfj" event={"ID":"c643b255-c909-4401-b9d3-4f1fa60aae98","Type":"ContainerStarted","Data":"7e8967ca66ee1a99c6feed062296d14b522efde12792bc7184838cd29d95feed"} Mar 16 17:52:03 crc kubenswrapper[4736]: I0316 17:52:03.272434 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561392-76xfj" podStartSLOduration=2.140308293 podStartE2EDuration="3.272411421s" podCreationTimestamp="2026-03-16 17:52:00 +0000 UTC" firstStartedPulling="2026-03-16 17:52:01.094218773 +0000 UTC m=+9522.821609100" lastFinishedPulling="2026-03-16 17:52:02.226321931 +0000 UTC m=+9523.953712228" observedRunningTime="2026-03-16 17:52:03.266292764 +0000 UTC m=+9524.993683051" watchObservedRunningTime="2026-03-16 17:52:03.272411421 +0000 UTC m=+9524.999801718" Mar 16 17:52:04 crc kubenswrapper[4736]: I0316 17:52:04.271905 4736 generic.go:334] "Generic (PLEG): container finished" podID="c643b255-c909-4401-b9d3-4f1fa60aae98" containerID="7e8967ca66ee1a99c6feed062296d14b522efde12792bc7184838cd29d95feed" exitCode=0 Mar 16 17:52:04 crc kubenswrapper[4736]: I0316 17:52:04.272015 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561392-76xfj" event={"ID":"c643b255-c909-4401-b9d3-4f1fa60aae98","Type":"ContainerDied","Data":"7e8967ca66ee1a99c6feed062296d14b522efde12792bc7184838cd29d95feed"} Mar 16 17:52:05 crc kubenswrapper[4736]: I0316 17:52:05.736223 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:05 crc kubenswrapper[4736]: I0316 17:52:05.927883 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rghlp\" (UniqueName: \"kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp\") pod \"c643b255-c909-4401-b9d3-4f1fa60aae98\" (UID: \"c643b255-c909-4401-b9d3-4f1fa60aae98\") " Mar 16 17:52:05 crc kubenswrapper[4736]: I0316 17:52:05.934170 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp" (OuterVolumeSpecName: "kube-api-access-rghlp") pod "c643b255-c909-4401-b9d3-4f1fa60aae98" (UID: "c643b255-c909-4401-b9d3-4f1fa60aae98"). InnerVolumeSpecName "kube-api-access-rghlp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.029707 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rghlp\" (UniqueName: \"kubernetes.io/projected/c643b255-c909-4401-b9d3-4f1fa60aae98-kube-api-access-rghlp\") on node \"crc\" DevicePath \"\"" Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.300329 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561392-76xfj" event={"ID":"c643b255-c909-4401-b9d3-4f1fa60aae98","Type":"ContainerDied","Data":"2536223c533c741ead85b6912149396e3a0550848d141083567f4b451e3c3d65"} Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.300997 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2536223c533c741ead85b6912149396e3a0550848d141083567f4b451e3c3d65" Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.300368 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561392-76xfj" Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.366482 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561386-2dd7q"] Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.375179 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561386-2dd7q"] Mar 16 17:52:06 crc kubenswrapper[4736]: I0316 17:52:06.992043 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f4a478-1082-40bd-9e7e-486e9a6d5ad8" path="/var/lib/kubelet/pods/c3f4a478-1082-40bd-9e7e-486e9a6d5ad8/volumes" Mar 16 17:53:04 crc kubenswrapper[4736]: I0316 17:53:04.409667 4736 scope.go:117] "RemoveContainer" containerID="ddbe79d31002430a6d12c8e075debb35962424c41650b1381f195d6297b58c8b" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.557954 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:53:30 crc kubenswrapper[4736]: E0316 17:53:30.560759 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c643b255-c909-4401-b9d3-4f1fa60aae98" containerName="oc" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.560942 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="c643b255-c909-4401-b9d3-4f1fa60aae98" containerName="oc" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.561454 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="c643b255-c909-4401-b9d3-4f1fa60aae98" containerName="oc" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.564062 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.570607 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.654998 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgr8\" (UniqueName: \"kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.655552 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.655807 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.757195 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgr8\" (UniqueName: \"kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.757628 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.757696 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.758241 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.758868 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.783067 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-thgr8\" (UniqueName: \"kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8\") pod \"redhat-operators-l6djk\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:30 crc kubenswrapper[4736]: I0316 17:53:30.930739 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:31 crc kubenswrapper[4736]: I0316 17:53:31.434854 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:53:32 crc kubenswrapper[4736]: I0316 17:53:32.369784 4736 generic.go:334] "Generic (PLEG): container finished" podID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerID="f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3" exitCode=0 Mar 16 17:53:32 crc kubenswrapper[4736]: I0316 17:53:32.369863 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerDied","Data":"f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3"} Mar 16 17:53:32 crc kubenswrapper[4736]: I0316 17:53:32.370126 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerStarted","Data":"07b388725f68da253d5839c4376e551d23085ee3d651ddb5ea45550a93dad492"} Mar 16 17:53:32 crc kubenswrapper[4736]: I0316 17:53:32.372346 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:53:34 crc kubenswrapper[4736]: I0316 17:53:34.389266 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerStarted","Data":"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c"} Mar 16 17:53:38 crc kubenswrapper[4736]: I0316 17:53:38.446645 4736 generic.go:334] "Generic (PLEG): container finished" podID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerID="cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c" exitCode=0 Mar 16 17:53:38 crc kubenswrapper[4736]: I0316 17:53:38.447141 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerDied","Data":"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c"} Mar 16 17:53:39 crc kubenswrapper[4736]: I0316 17:53:39.466863 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerStarted","Data":"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087"} Mar 16 17:53:39 crc kubenswrapper[4736]: I0316 17:53:39.497012 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6djk" podStartSLOduration=2.900663941 podStartE2EDuration="9.496990651s" podCreationTimestamp="2026-03-16 17:53:30 +0000 UTC" firstStartedPulling="2026-03-16 17:53:32.371584791 +0000 UTC m=+9614.098975078" lastFinishedPulling="2026-03-16 17:53:38.967911501 +0000 UTC m=+9620.695301788" observedRunningTime="2026-03-16 17:53:39.488396626 +0000 UTC m=+9621.215786923" watchObservedRunningTime="2026-03-16 17:53:39.496990651 +0000 UTC m=+9621.224380938" Mar 16 17:53:40 crc 
kubenswrapper[4736]: I0316 17:53:40.931707 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:40 crc kubenswrapper[4736]: I0316 17:53:40.932041 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:53:41 crc kubenswrapper[4736]: I0316 17:53:41.976478 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6djk" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" probeResult="failure" output=< Mar 16 17:53:41 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:53:41 crc kubenswrapper[4736]: > Mar 16 17:53:51 crc kubenswrapper[4736]: I0316 17:53:51.987524 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6djk" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" probeResult="failure" output=< Mar 16 17:53:51 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:53:51 crc kubenswrapper[4736]: > Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.143317 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561394-876bp"] Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.147213 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.151009 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.151257 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.151258 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.164148 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561394-876bp"] Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.249242 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qr7\" (UniqueName: \"kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7\") pod \"auto-csr-approver-29561394-876bp\" (UID: \"39508fd6-9c60-46e3-b4b0-6df56aed9587\") " pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.351685 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qr7\" (UniqueName: \"kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7\") pod \"auto-csr-approver-29561394-876bp\" (UID: \"39508fd6-9c60-46e3-b4b0-6df56aed9587\") " pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:00 crc kubenswrapper[4736]: I0316 17:54:00.378307 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qr7\" (UniqueName: \"kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7\") pod \"auto-csr-approver-29561394-876bp\" (UID: \"39508fd6-9c60-46e3-b4b0-6df56aed9587\") " pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:00 crc 
kubenswrapper[4736]: I0316 17:54:00.483875 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:01 crc kubenswrapper[4736]: I0316 17:54:01.101996 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561394-876bp"] Mar 16 17:54:01 crc kubenswrapper[4736]: I0316 17:54:01.712492 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561394-876bp" event={"ID":"39508fd6-9c60-46e3-b4b0-6df56aed9587","Type":"ContainerStarted","Data":"e5e875898a6d3a1ef8dbf09759d85924de8806ce7bd550ffd7ef3dc0a5a2583b"} Mar 16 17:54:01 crc kubenswrapper[4736]: I0316 17:54:01.987472 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6djk" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" probeResult="failure" output=< Mar 16 17:54:01 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:54:01 crc kubenswrapper[4736]: > Mar 16 17:54:03 crc kubenswrapper[4736]: I0316 17:54:03.738819 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561394-876bp" event={"ID":"39508fd6-9c60-46e3-b4b0-6df56aed9587","Type":"ContainerStarted","Data":"2e0012792158552b2c29bce74a4cd1dc65f5b729105fcbfa14f7d8abce0b459f"} Mar 16 17:54:03 crc kubenswrapper[4736]: I0316 17:54:03.760275 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561394-876bp" podStartSLOduration=2.126272651 podStartE2EDuration="3.760259466s" podCreationTimestamp="2026-03-16 17:54:00 +0000 UTC" firstStartedPulling="2026-03-16 17:54:01.115298989 +0000 UTC m=+9642.842689276" lastFinishedPulling="2026-03-16 17:54:02.749285804 +0000 UTC m=+9644.476676091" observedRunningTime="2026-03-16 17:54:03.754007286 +0000 UTC m=+9645.481397573" watchObservedRunningTime="2026-03-16 17:54:03.760259466 +0000 UTC m=+9645.487649753" Mar 16 17:54:04 crc kubenswrapper[4736]: I0316 17:54:04.558949 4736 scope.go:117] "RemoveContainer" containerID="1d33d99f98f7632b51eb294887cab13e2da55f05ef0e7734c18f7d6b3434acba" Mar 16 17:54:04 crc kubenswrapper[4736]: I0316 17:54:04.614337 4736 scope.go:117] "RemoveContainer" containerID="f90626a5064dd68bbaa64c802d8b58c228dd0996f9d1cae7210464891cce9d21" Mar 16 17:54:04 crc kubenswrapper[4736]: I0316 17:54:04.639831 4736 scope.go:117] "RemoveContainer" containerID="1ad110536af9d64f78a138c19ef2957b4f5982665c805186cd7c8332b8cc12c3" Mar 16 17:54:04 crc kubenswrapper[4736]: I0316 17:54:04.772211 4736 generic.go:334] "Generic (PLEG): container finished" podID="39508fd6-9c60-46e3-b4b0-6df56aed9587" containerID="2e0012792158552b2c29bce74a4cd1dc65f5b729105fcbfa14f7d8abce0b459f" exitCode=0 Mar 16 17:54:04 crc kubenswrapper[4736]: I0316 17:54:04.772335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561394-876bp" event={"ID":"39508fd6-9c60-46e3-b4b0-6df56aed9587","Type":"ContainerDied","Data":"2e0012792158552b2c29bce74a4cd1dc65f5b729105fcbfa14f7d8abce0b459f"} Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.134648 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.172441 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5qr7\" (UniqueName: \"kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7\") pod \"39508fd6-9c60-46e3-b4b0-6df56aed9587\" (UID: \"39508fd6-9c60-46e3-b4b0-6df56aed9587\") " Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.183793 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7" (OuterVolumeSpecName: "kube-api-access-l5qr7") pod "39508fd6-9c60-46e3-b4b0-6df56aed9587" (UID: "39508fd6-9c60-46e3-b4b0-6df56aed9587"). InnerVolumeSpecName "kube-api-access-l5qr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.274649 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5qr7\" (UniqueName: \"kubernetes.io/projected/39508fd6-9c60-46e3-b4b0-6df56aed9587-kube-api-access-l5qr7\") on node \"crc\" DevicePath \"\"" Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.797550 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561394-876bp" event={"ID":"39508fd6-9c60-46e3-b4b0-6df56aed9587","Type":"ContainerDied","Data":"e5e875898a6d3a1ef8dbf09759d85924de8806ce7bd550ffd7ef3dc0a5a2583b"} Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.797598 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e875898a6d3a1ef8dbf09759d85924de8806ce7bd550ffd7ef3dc0a5a2583b" Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.797664 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561394-876bp" Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.833278 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561388-wz6c2"] Mar 16 17:54:06 crc kubenswrapper[4736]: I0316 17:54:06.841324 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561388-wz6c2"] Mar 16 17:54:07 crc kubenswrapper[4736]: I0316 17:54:07.004978 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afd2be9-62fc-431e-9d0c-846909035ea6" path="/var/lib/kubelet/pods/3afd2be9-62fc-431e-9d0c-846909035ea6/volumes" Mar 16 17:54:08 crc kubenswrapper[4736]: I0316 17:54:08.507996 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:54:08 crc kubenswrapper[4736]: I0316 17:54:08.508950 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:54:11 crc kubenswrapper[4736]: I0316 17:54:11.996596 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6djk" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" probeResult="failure" output=< Mar 16 17:54:11 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:54:11 crc kubenswrapper[4736]: > Mar 16 17:54:21 crc kubenswrapper[4736]: I0316 17:54:21.009506 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:54:21 crc kubenswrapper[4736]: I0316 17:54:21.071372 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:54:21 crc kubenswrapper[4736]: I0316 17:54:21.262923 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:54:22 crc kubenswrapper[4736]: I0316 17:54:22.980994 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l6djk" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" containerID="cri-o://df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087" gracePeriod=2 Mar 16 17:54:23 crc kubenswrapper[4736]: E0316 17:54:23.163074 4736 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30595c0d_fcb6_41ad_8d43_253a013a4a46.slice/crio-df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087.scope\": RecentStats: unable to find data in memory cache]" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.592586 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.769170 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities\") pod \"30595c0d-fcb6-41ad-8d43-253a013a4a46\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.769502 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgr8\" (UniqueName: \"kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8\") pod \"30595c0d-fcb6-41ad-8d43-253a013a4a46\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.769815 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content\") pod \"30595c0d-fcb6-41ad-8d43-253a013a4a46\" (UID: \"30595c0d-fcb6-41ad-8d43-253a013a4a46\") " Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.770357 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities" (OuterVolumeSpecName: "utilities") pod "30595c0d-fcb6-41ad-8d43-253a013a4a46" (UID: "30595c0d-fcb6-41ad-8d43-253a013a4a46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.770716 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.776159 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8" (OuterVolumeSpecName: "kube-api-access-thgr8") pod "30595c0d-fcb6-41ad-8d43-253a013a4a46" (UID: "30595c0d-fcb6-41ad-8d43-253a013a4a46"). InnerVolumeSpecName "kube-api-access-thgr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.873895 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgr8\" (UniqueName: \"kubernetes.io/projected/30595c0d-fcb6-41ad-8d43-253a013a4a46-kube-api-access-thgr8\") on node \"crc\" DevicePath \"\"" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.890184 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30595c0d-fcb6-41ad-8d43-253a013a4a46" (UID: "30595c0d-fcb6-41ad-8d43-253a013a4a46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.976798 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30595c0d-fcb6-41ad-8d43-253a013a4a46-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.993054 4736 generic.go:334] "Generic (PLEG): container finished" podID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerID="df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087" exitCode=0 Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.993151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerDied","Data":"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087"} Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.993193 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6djk" event={"ID":"30595c0d-fcb6-41ad-8d43-253a013a4a46","Type":"ContainerDied","Data":"07b388725f68da253d5839c4376e551d23085ee3d651ddb5ea45550a93dad492"} Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.993230 4736 scope.go:117] "RemoveContainer" containerID="df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087" Mar 16 17:54:23 crc kubenswrapper[4736]: I0316 17:54:23.993414 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6djk" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.039481 4736 scope.go:117] "RemoveContainer" containerID="cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.042637 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.056988 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l6djk"] Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.075695 4736 scope.go:117] "RemoveContainer" containerID="f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.116828 4736 scope.go:117] "RemoveContainer" containerID="df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087" Mar 16 17:54:24 crc kubenswrapper[4736]: E0316 17:54:24.122070 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087\": container with ID starting with df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087 not found: ID does not exist" containerID="df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.122306 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087"} err="failed to get container status \"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087\": rpc error: code = NotFound desc = could not find container \"df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087\": container with ID starting with df8b5bd355de889dddc88bf93ac4afd8f4bf406af4152279429fe5ae2d339087 not found: ID does not exist" Mar 16 17:54:24 crc 
kubenswrapper[4736]: I0316 17:54:24.122337 4736 scope.go:117] "RemoveContainer" containerID="cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c" Mar 16 17:54:24 crc kubenswrapper[4736]: E0316 17:54:24.122860 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c\": container with ID starting with cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c not found: ID does not exist" containerID="cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.122901 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c"} err="failed to get container status \"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c\": rpc error: code = NotFound desc = could not find container \"cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c\": container with ID starting with cbec450f6f8d016aaca8592b41d47a702bce4ae4b621c22d49c485f4272c713c not found: ID does not exist" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.122927 4736 scope.go:117] "RemoveContainer" containerID="f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3" Mar 16 17:54:24 crc kubenswrapper[4736]: E0316 17:54:24.123462 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3\": container with ID starting with f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3 not found: ID does not exist" containerID="f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.123498 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3"} err="failed to get container status \"f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3\": rpc error: code = NotFound desc = could not find container \"f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3\": container with ID starting with f51e3186a070897e5d4c25c09f29654c6c396754c61494fa65d7146f1b82bdd3 not found: ID does not exist" Mar 16 17:54:24 crc kubenswrapper[4736]: I0316 17:54:24.993334 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" path="/var/lib/kubelet/pods/30595c0d-fcb6-41ad-8d43-253a013a4a46/volumes" Mar 16 17:54:38 crc kubenswrapper[4736]: I0316 17:54:38.508044 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:54:38 crc kubenswrapper[4736]: I0316 17:54:38.508776 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:55:04 crc kubenswrapper[4736]: I0316 17:55:04.834877 4736 scope.go:117] "RemoveContainer" 
containerID="a9ec82965dc19a8b724c48ed7a8dac03baa3d023346e32652dbcf978dcb7dc92" Mar 16 17:55:08 crc kubenswrapper[4736]: I0316 17:55:08.508335 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:55:08 crc kubenswrapper[4736]: I0316 17:55:08.509033 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:55:08 crc kubenswrapper[4736]: I0316 17:55:08.509099 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:55:08 crc kubenswrapper[4736]: I0316 17:55:08.510707 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:55:08 crc kubenswrapper[4736]: I0316 17:55:08.510819 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5" gracePeriod=600 Mar 16 17:55:09 crc kubenswrapper[4736]: I0316 17:55:09.486698 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5" exitCode=0 Mar 16 17:55:09 crc kubenswrapper[4736]: I0316 17:55:09.486794 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5"} Mar 16 17:55:09 crc kubenswrapper[4736]: I0316 17:55:09.487161 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898"} Mar 16 17:55:09 crc kubenswrapper[4736]: I0316 17:55:09.487190 4736 scope.go:117] "RemoveContainer" containerID="df1281bc61347e9074599ae159da7430c85e1f22319697dacc7c822c5f306c43" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.159800 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561396-mwwsm"] Mar 16 17:56:00 crc kubenswrapper[4736]: E0316 17:56:00.161252 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="extract-utilities" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161274 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="extract-utilities" Mar 16 
17:56:00 crc kubenswrapper[4736]: E0316 17:56:00.161331 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="extract-content" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161346 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="extract-content" Mar 16 17:56:00 crc kubenswrapper[4736]: E0316 17:56:00.161365 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161380 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" Mar 16 17:56:00 crc kubenswrapper[4736]: E0316 17:56:00.161410 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39508fd6-9c60-46e3-b4b0-6df56aed9587" containerName="oc" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161425 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="39508fd6-9c60-46e3-b4b0-6df56aed9587" containerName="oc" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161778 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="39508fd6-9c60-46e3-b4b0-6df56aed9587" containerName="oc" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.161825 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="30595c0d-fcb6-41ad-8d43-253a013a4a46" containerName="registry-server" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.162863 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.174642 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.174929 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.180596 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.181859 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561396-mwwsm"] Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.257076 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnt7\" (UniqueName: \"kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7\") pod \"auto-csr-approver-29561396-mwwsm\" (UID: \"2d092be4-2da3-435c-ac03-4a88a6a6294f\") " pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:00 crc kubenswrapper[4736]: I0316 17:56:00.359066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnt7\" (UniqueName: \"kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7\") pod \"auto-csr-approver-29561396-mwwsm\" (UID: \"2d092be4-2da3-435c-ac03-4a88a6a6294f\") " pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:01 crc kubenswrapper[4736]: I0316 17:56:01.200564 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnt7\" (UniqueName: 
\"kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7\") pod \"auto-csr-approver-29561396-mwwsm\" (UID: \"2d092be4-2da3-435c-ac03-4a88a6a6294f\") " pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:01 crc kubenswrapper[4736]: I0316 17:56:01.393301 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:01 crc kubenswrapper[4736]: I0316 17:56:01.746183 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561396-mwwsm"] Mar 16 17:56:02 crc kubenswrapper[4736]: I0316 17:56:02.026456 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" event={"ID":"2d092be4-2da3-435c-ac03-4a88a6a6294f","Type":"ContainerStarted","Data":"3d56d3fe42f94c0cd4dde9c3d48d096d42a93e72f6e6e56b1aa08422617d0199"} Mar 16 17:56:04 crc kubenswrapper[4736]: I0316 17:56:04.045472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" event={"ID":"2d092be4-2da3-435c-ac03-4a88a6a6294f","Type":"ContainerStarted","Data":"b368cdea3518b8351e786bdbddaa618029ae41a04c9e7c9839bb19e2e045a313"} Mar 16 17:56:04 crc kubenswrapper[4736]: I0316 17:56:04.069441 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" podStartSLOduration=3.164385775 podStartE2EDuration="4.069419565s" podCreationTimestamp="2026-03-16 17:56:00 +0000 UTC" firstStartedPulling="2026-03-16 17:56:01.761411564 +0000 UTC m=+9763.488801861" lastFinishedPulling="2026-03-16 17:56:02.666445324 +0000 UTC m=+9764.393835651" observedRunningTime="2026-03-16 17:56:04.065096427 +0000 UTC m=+9765.792486754" watchObservedRunningTime="2026-03-16 17:56:04.069419565 +0000 UTC m=+9765.796809862" Mar 16 17:56:05 crc kubenswrapper[4736]: I0316 17:56:05.057998 4736 generic.go:334] "Generic (PLEG): container finished" podID="2d092be4-2da3-435c-ac03-4a88a6a6294f" containerID="b368cdea3518b8351e786bdbddaa618029ae41a04c9e7c9839bb19e2e045a313" exitCode=0 Mar 16 17:56:05 crc kubenswrapper[4736]: I0316 17:56:05.058052 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" event={"ID":"2d092be4-2da3-435c-ac03-4a88a6a6294f","Type":"ContainerDied","Data":"b368cdea3518b8351e786bdbddaa618029ae41a04c9e7c9839bb19e2e045a313"} Mar 16 17:56:06 crc kubenswrapper[4736]: I0316 17:56:06.510569 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:06 crc kubenswrapper[4736]: I0316 17:56:06.591006 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggnt7\" (UniqueName: \"kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7\") pod \"2d092be4-2da3-435c-ac03-4a88a6a6294f\" (UID: \"2d092be4-2da3-435c-ac03-4a88a6a6294f\") " Mar 16 17:56:06 crc kubenswrapper[4736]: I0316 17:56:06.597575 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7" (OuterVolumeSpecName: "kube-api-access-ggnt7") pod "2d092be4-2da3-435c-ac03-4a88a6a6294f" (UID: "2d092be4-2da3-435c-ac03-4a88a6a6294f"). InnerVolumeSpecName "kube-api-access-ggnt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:56:06 crc kubenswrapper[4736]: I0316 17:56:06.694181 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggnt7\" (UniqueName: \"kubernetes.io/projected/2d092be4-2da3-435c-ac03-4a88a6a6294f-kube-api-access-ggnt7\") on node \"crc\" DevicePath \"\"" Mar 16 17:56:07 crc kubenswrapper[4736]: I0316 17:56:07.083889 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" event={"ID":"2d092be4-2da3-435c-ac03-4a88a6a6294f","Type":"ContainerDied","Data":"3d56d3fe42f94c0cd4dde9c3d48d096d42a93e72f6e6e56b1aa08422617d0199"} Mar 16 17:56:07 crc kubenswrapper[4736]: I0316 17:56:07.083941 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d56d3fe42f94c0cd4dde9c3d48d096d42a93e72f6e6e56b1aa08422617d0199" Mar 16 17:56:07 crc kubenswrapper[4736]: I0316 17:56:07.083983 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561396-mwwsm" Mar 16 17:56:07 crc kubenswrapper[4736]: I0316 17:56:07.159713 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561390-w84mr"] Mar 16 17:56:07 crc kubenswrapper[4736]: I0316 17:56:07.170026 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561390-w84mr"] Mar 16 17:56:08 crc kubenswrapper[4736]: I0316 17:56:08.997871 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859830c2-d94f-4fd8-a5ab-f59ade814dff" path="/var/lib/kubelet/pods/859830c2-d94f-4fd8-a5ab-f59ade814dff/volumes" Mar 16 17:57:04 crc kubenswrapper[4736]: I0316 17:57:04.987953 4736 scope.go:117] "RemoveContainer" containerID="95b976bee28298ff6e6f29c5e9c58f3739c10a4858d849e4c9694e32043d0587" Mar 16 17:57:38 crc kubenswrapper[4736]: I0316 17:57:38.507573 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:57:38 crc kubenswrapper[4736]: I0316 17:57:38.508237 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.184825 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561398-dqbdk"] Mar 16 17:58:00 crc kubenswrapper[4736]: E0316 17:58:00.185816 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d092be4-2da3-435c-ac03-4a88a6a6294f" containerName="oc" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.185834 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d092be4-2da3-435c-ac03-4a88a6a6294f" containerName="oc" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.186194 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d092be4-2da3-435c-ac03-4a88a6a6294f" containerName="oc" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.186888 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.189606 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.189854 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.190025 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.202807 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561398-dqbdk"] Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.253792 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2n7\" (UniqueName: \"kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7\") pod \"auto-csr-approver-29561398-dqbdk\" (UID: \"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330\") " pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.356379 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m2n7\" (UniqueName: \"kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7\") pod \"auto-csr-approver-29561398-dqbdk\" (UID: \"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330\") " pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.378439 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m2n7\" (UniqueName: \"kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7\") pod \"auto-csr-approver-29561398-dqbdk\" (UID: \"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330\") " pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:00 crc kubenswrapper[4736]: I0316 17:58:00.508713 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:01 crc kubenswrapper[4736]: I0316 17:58:01.002973 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561398-dqbdk"] Mar 16 17:58:01 crc kubenswrapper[4736]: I0316 17:58:01.325035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" event={"ID":"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330","Type":"ContainerStarted","Data":"78e610a57ba6910c4f3a4596c4f67b483860cece461a99d998fef70932c4b839"} Mar 16 17:58:03 crc kubenswrapper[4736]: I0316 17:58:03.350723 4736 generic.go:334] "Generic (PLEG): container finished" podID="34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" containerID="920a999d52ca2809359f1cdab5c65ec04e093abd9c2865180166158bc8feae17" exitCode=0 Mar 16 17:58:03 crc kubenswrapper[4736]: I0316 17:58:03.350850 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" event={"ID":"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330","Type":"ContainerDied","Data":"920a999d52ca2809359f1cdab5c65ec04e093abd9c2865180166158bc8feae17"} Mar 16 17:58:04 crc kubenswrapper[4736]: I0316 17:58:04.735608 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:04 crc kubenswrapper[4736]: I0316 17:58:04.761541 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m2n7\" (UniqueName: \"kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7\") pod \"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330\" (UID: \"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330\") " Mar 16 17:58:04 crc kubenswrapper[4736]: I0316 17:58:04.769328 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7" (OuterVolumeSpecName: "kube-api-access-5m2n7") pod "34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" (UID: "34ca4a3b-a1c6-43fb-8740-fa9eaefd7330"). InnerVolumeSpecName "kube-api-access-5m2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:58:04 crc kubenswrapper[4736]: I0316 17:58:04.863668 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m2n7\" (UniqueName: \"kubernetes.io/projected/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330-kube-api-access-5m2n7\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:05 crc kubenswrapper[4736]: I0316 17:58:05.377626 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" event={"ID":"34ca4a3b-a1c6-43fb-8740-fa9eaefd7330","Type":"ContainerDied","Data":"78e610a57ba6910c4f3a4596c4f67b483860cece461a99d998fef70932c4b839"} Mar 16 17:58:05 crc kubenswrapper[4736]: I0316 17:58:05.378048 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78e610a57ba6910c4f3a4596c4f67b483860cece461a99d998fef70932c4b839" Mar 16 17:58:05 crc kubenswrapper[4736]: I0316 17:58:05.377718 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561398-dqbdk" Mar 16 17:58:05 crc kubenswrapper[4736]: I0316 17:58:05.820647 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561392-76xfj"] Mar 16 17:58:05 crc kubenswrapper[4736]: I0316 17:58:05.830778 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561392-76xfj"] Mar 16 17:58:06 crc kubenswrapper[4736]: I0316 17:58:06.990687 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c643b255-c909-4401-b9d3-4f1fa60aae98" path="/var/lib/kubelet/pods/c643b255-c909-4401-b9d3-4f1fa60aae98/volumes" Mar 16 17:58:08 crc kubenswrapper[4736]: I0316 17:58:08.508144 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:58:08 crc kubenswrapper[4736]: I0316 17:58:08.508504 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.753719 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:17 crc kubenswrapper[4736]: E0316 17:58:17.754715 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" containerName="oc" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.754733 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" containerName="oc" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.755043 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" containerName="oc" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.759627 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.777403 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.905761 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.905878 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:17 crc kubenswrapper[4736]: I0316 17:58:17.906303 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpcpt\" (UniqueName: \"kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.008026 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.008629 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpcpt\" (UniqueName: \"kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.008890 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.009551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.009579 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.036423 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cpcpt\" (UniqueName: \"kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt\") pod \"certified-operators-s465d\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.104925 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:18 crc kubenswrapper[4736]: I0316 17:58:18.642699 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:19 crc kubenswrapper[4736]: I0316 17:58:19.517048 4736 generic.go:334] "Generic (PLEG): container finished" podID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerID="ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42" exitCode=0 Mar 16 17:58:19 crc kubenswrapper[4736]: I0316 17:58:19.517579 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerDied","Data":"ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42"} Mar 16 17:58:19 crc kubenswrapper[4736]: I0316 17:58:19.517610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerStarted","Data":"fa259fccc7236ff2f07b730a41aef22815b5c47d411a734cbad2a7d973effd09"} Mar 16 17:58:21 crc kubenswrapper[4736]: I0316 17:58:21.538525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerStarted","Data":"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54"} Mar 16 17:58:23 crc kubenswrapper[4736]: I0316 17:58:23.561666 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerDied","Data":"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54"} Mar 16 17:58:23 crc kubenswrapper[4736]: I0316 17:58:23.561608 4736 generic.go:334] "Generic (PLEG): container finished" podID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerID="4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54" exitCode=0 Mar 16 17:58:24 crc kubenswrapper[4736]: I0316 17:58:24.610151 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerStarted","Data":"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c"} Mar 16 17:58:24 crc kubenswrapper[4736]: I0316 17:58:24.629322 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s465d" podStartSLOduration=3.174890964 podStartE2EDuration="7.629305366s" podCreationTimestamp="2026-03-16 17:58:17 +0000 UTC" firstStartedPulling="2026-03-16 17:58:19.520340719 +0000 UTC m=+9901.247731016" lastFinishedPulling="2026-03-16 17:58:23.974755131 +0000 UTC m=+9905.702145418" observedRunningTime="2026-03-16 17:58:24.626620423 +0000 UTC m=+9906.354010700" watchObservedRunningTime="2026-03-16 17:58:24.629305366 +0000 UTC m=+9906.356695653" Mar 16 17:58:28 crc kubenswrapper[4736]: I0316 17:58:28.106262 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:28 crc kubenswrapper[4736]: I0316 17:58:28.106840 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.154937 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s465d" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" probeResult="failure" output=< Mar 16 17:58:29 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:58:29 crc kubenswrapper[4736]: > Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.327874 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.330677 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.408991 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.443384 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjtr\" (UniqueName: \"kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.443427 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.443466 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.545370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjtr\" (UniqueName: \"kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.545411 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.545451 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content\") pod \"community-operators-xvwm9\" (UID: 
\"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.545908 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.546038 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.570040 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjtr\" (UniqueName: \"kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr\") pod \"community-operators-xvwm9\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:29 crc kubenswrapper[4736]: I0316 17:58:29.679731 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:30 crc kubenswrapper[4736]: I0316 17:58:30.367843 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 17:58:30 crc kubenswrapper[4736]: W0316 17:58:30.817145 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43252e2_f02a_4803_b945_9ce6e746e104.slice/crio-8cd618296222679e356739096beed470242b89a08b3d21276dc5e5424380dbc1 WatchSource:0}: Error finding container 8cd618296222679e356739096beed470242b89a08b3d21276dc5e5424380dbc1: Status 404 returned error can't find the container with id 8cd618296222679e356739096beed470242b89a08b3d21276dc5e5424380dbc1 Mar 16 17:58:31 crc kubenswrapper[4736]: I0316 17:58:31.708582 4736 generic.go:334] "Generic (PLEG): container finished" podID="e43252e2-f02a-4803-b945-9ce6e746e104" containerID="6316c85298b3d0a321611a6c5fe4704eb956d5a40ecc3a30df90cf90cc420ccf" exitCode=0 Mar 16 17:58:31 crc kubenswrapper[4736]: I0316 17:58:31.710156 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerDied","Data":"6316c85298b3d0a321611a6c5fe4704eb956d5a40ecc3a30df90cf90cc420ccf"} Mar 16 17:58:31 crc kubenswrapper[4736]: I0316 17:58:31.710201 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerStarted","Data":"8cd618296222679e356739096beed470242b89a08b3d21276dc5e5424380dbc1"} Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.508193 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.509052 4736 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.509130 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.510527 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.510590 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" gracePeriod=600 Mar 16 17:58:38 crc kubenswrapper[4736]: E0316 17:58:38.630408 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.793620 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" exitCode=0 Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.793712 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898"} Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.793750 4736 scope.go:117] "RemoveContainer" containerID="30a0d3d21403108f693ee58d25df024e1c56e38cab49db06a2b209796df6e1b5" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.794483 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:58:38 crc kubenswrapper[4736]: E0316 17:58:38.794924 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:58:38 crc kubenswrapper[4736]: I0316 17:58:38.797290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" 
event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerStarted","Data":"c164ca58845da298bb4eaa03d7a6ff365af0930c0bfdb13bccf826c54e20cf62"} Mar 16 17:58:39 crc kubenswrapper[4736]: I0316 17:58:39.152036 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s465d" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" probeResult="failure" output=< Mar 16 17:58:39 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:58:39 crc kubenswrapper[4736]: > Mar 16 17:58:40 crc kubenswrapper[4736]: I0316 17:58:40.824983 4736 generic.go:334] "Generic (PLEG): container finished" podID="e43252e2-f02a-4803-b945-9ce6e746e104" containerID="c164ca58845da298bb4eaa03d7a6ff365af0930c0bfdb13bccf826c54e20cf62" exitCode=0 Mar 16 17:58:40 crc kubenswrapper[4736]: I0316 17:58:40.825061 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerDied","Data":"c164ca58845da298bb4eaa03d7a6ff365af0930c0bfdb13bccf826c54e20cf62"} Mar 16 17:58:40 crc kubenswrapper[4736]: I0316 17:58:40.830318 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 17:58:41 crc kubenswrapper[4736]: I0316 17:58:41.835060 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerStarted","Data":"2f832717c8f9f1d97e80def2b9c87c0f86d36258536fe174866cee570bf740cd"} Mar 16 17:58:41 crc kubenswrapper[4736]: I0316 17:58:41.858548 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xvwm9" podStartSLOduration=3.339072425 podStartE2EDuration="12.858530605s" podCreationTimestamp="2026-03-16 17:58:29 +0000 UTC" firstStartedPulling="2026-03-16 17:58:31.712263257 +0000 UTC m=+9913.439653544" lastFinishedPulling="2026-03-16 17:58:41.231721447 +0000 UTC m=+9922.959111724" observedRunningTime="2026-03-16 17:58:41.852649124 +0000 UTC m=+9923.580039411" watchObservedRunningTime="2026-03-16 17:58:41.858530605 +0000 UTC m=+9923.585920892" Mar 16 17:58:48 crc kubenswrapper[4736]: I0316 17:58:48.176751 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:48 crc kubenswrapper[4736]: I0316 17:58:48.266517 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:48 crc kubenswrapper[4736]: I0316 17:58:48.976155 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:49 crc kubenswrapper[4736]: I0316 17:58:49.680652 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:49 crc kubenswrapper[4736]: I0316 17:58:49.680715 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:49 crc kubenswrapper[4736]: I0316 17:58:49.740500 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:49 crc kubenswrapper[4736]: I0316 17:58:49.938915 4736 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-s465d" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" containerID="cri-o://d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c" gracePeriod=2 Mar 16 17:58:49 crc kubenswrapper[4736]: I0316 17:58:49.994624 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.765357 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.854641 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content\") pod \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.854754 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpcpt\" (UniqueName: \"kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt\") pod \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.854792 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities\") pod \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\" (UID: \"b719f7c1-5e8e-4358-9bf5-c9c1662d2519\") " Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.856294 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities" (OuterVolumeSpecName: "utilities") pod "b719f7c1-5e8e-4358-9bf5-c9c1662d2519" (UID: "b719f7c1-5e8e-4358-9bf5-c9c1662d2519"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.867323 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt" (OuterVolumeSpecName: "kube-api-access-cpcpt") pod "b719f7c1-5e8e-4358-9bf5-c9c1662d2519" (UID: "b719f7c1-5e8e-4358-9bf5-c9c1662d2519"). InnerVolumeSpecName "kube-api-access-cpcpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.923067 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b719f7c1-5e8e-4358-9bf5-c9c1662d2519" (UID: "b719f7c1-5e8e-4358-9bf5-c9c1662d2519"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.952924 4736 generic.go:334] "Generic (PLEG): container finished" podID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerID="d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c" exitCode=0 Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.954222 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s465d" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.955405 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerDied","Data":"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c"} Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.955475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s465d" event={"ID":"b719f7c1-5e8e-4358-9bf5-c9c1662d2519","Type":"ContainerDied","Data":"fa259fccc7236ff2f07b730a41aef22815b5c47d411a734cbad2a7d973effd09"} Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.955499 4736 scope.go:117] "RemoveContainer" containerID="d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.956709 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.956817 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpcpt\" (UniqueName: \"kubernetes.io/projected/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-kube-api-access-cpcpt\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.956904 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b719f7c1-5e8e-4358-9bf5-c9c1662d2519-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.980576 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:58:50 crc kubenswrapper[4736]: E0316 17:58:50.980837 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:58:50 crc kubenswrapper[4736]: I0316 17:58:50.986421 4736 scope.go:117] "RemoveContainer" containerID="4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.008050 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.015472 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s465d"] Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.017679 4736 scope.go:117] "RemoveContainer" containerID="ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.077919 4736 scope.go:117] "RemoveContainer" containerID="d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c" Mar 16 17:58:51 crc kubenswrapper[4736]: E0316 17:58:51.078438 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c\": container with ID starting 
with d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c not found: ID does not exist" containerID="d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.078465 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c"} err="failed to get container status \"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c\": rpc error: code = NotFound desc = could not find container \"d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c\": container with ID starting with d2894d8265c2b75f0f3eb1f76756a451c682a966f9f98eb6b8b35664d3e3455c not found: ID does not exist" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.078486 4736 scope.go:117] "RemoveContainer" containerID="4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54" Mar 16 17:58:51 crc kubenswrapper[4736]: E0316 17:58:51.078808 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54\": container with ID starting with 4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54 not found: ID does not exist" containerID="4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.078884 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54"} err="failed to get container status \"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54\": rpc error: code = NotFound desc = could not find container \"4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54\": container with ID starting with 4011f75fe59a027635add6816b8829d7e32389b7501812d817b9b6de41adac54 not found: ID does not exist" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.078917 4736 scope.go:117] "RemoveContainer" containerID="ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42" Mar 16 17:58:51 crc kubenswrapper[4736]: E0316 17:58:51.079279 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42\": container with ID starting with ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42 not found: ID does not exist" containerID="ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.079312 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42"} err="failed to get container status \"ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42\": rpc error: code = NotFound desc = could not find container \"ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42\": container with ID starting with ac56b2d18aa9883200b9f0c7548e71a6acdf5f11d894b7b15321a8733c089d42 not found: ID does not exist" Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.470695 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.779282 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.787348 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g877r" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="registry-server" containerID="cri-o://c365f9a8a06a59da1232f92dd3b3ef790c042fe8520eac9a52aeedf8e2d0bd03" gracePeriod=2 Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.973755 4736 generic.go:334] "Generic (PLEG): container finished" podID="775260d4-2928-45a0-8196-e286f09f748a" containerID="c365f9a8a06a59da1232f92dd3b3ef790c042fe8520eac9a52aeedf8e2d0bd03" exitCode=0 Mar 16 17:58:51 crc kubenswrapper[4736]: I0316 17:58:51.973835 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerDied","Data":"c365f9a8a06a59da1232f92dd3b3ef790c042fe8520eac9a52aeedf8e2d0bd03"} Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.313416 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.387738 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content\") pod \"775260d4-2928-45a0-8196-e286f09f748a\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.387926 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities\") pod \"775260d4-2928-45a0-8196-e286f09f748a\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.387989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdsn4\" (UniqueName: \"kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4\") pod \"775260d4-2928-45a0-8196-e286f09f748a\" (UID: \"775260d4-2928-45a0-8196-e286f09f748a\") " Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.388491 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities" (OuterVolumeSpecName: "utilities") pod "775260d4-2928-45a0-8196-e286f09f748a" (UID: "775260d4-2928-45a0-8196-e286f09f748a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.403306 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4" (OuterVolumeSpecName: "kube-api-access-zdsn4") pod "775260d4-2928-45a0-8196-e286f09f748a" (UID: "775260d4-2928-45a0-8196-e286f09f748a"). InnerVolumeSpecName "kube-api-access-zdsn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.452831 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "775260d4-2928-45a0-8196-e286f09f748a" (UID: "775260d4-2928-45a0-8196-e286f09f748a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.489837 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdsn4\" (UniqueName: \"kubernetes.io/projected/775260d4-2928-45a0-8196-e286f09f748a-kube-api-access-zdsn4\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.489870 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:52 crc kubenswrapper[4736]: I0316 17:58:52.489884 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/775260d4-2928-45a0-8196-e286f09f748a-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.004964 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" path="/var/lib/kubelet/pods/b719f7c1-5e8e-4358-9bf5-c9c1662d2519/volumes" Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.012978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g877r" event={"ID":"775260d4-2928-45a0-8196-e286f09f748a","Type":"ContainerDied","Data":"9b25ac3ba79ef35988fdc6057a43f306f62841be71775f0e04f8f12479a2d11e"} Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.013033 4736 scope.go:117] "RemoveContainer" containerID="c365f9a8a06a59da1232f92dd3b3ef790c042fe8520eac9a52aeedf8e2d0bd03" Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.013160 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g877r" Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.068180 4736 scope.go:117] "RemoveContainer" containerID="fa80f7e0d017571f25bc2dccd278c5bd26d350e7e98a0fcfc4b131d7a5dfaad0" Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.092229 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.107910 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g877r"] Mar 16 17:58:53 crc kubenswrapper[4736]: I0316 17:58:53.308254 4736 scope.go:117] "RemoveContainer" containerID="77a5185f978e8e46bdab94b84c16cb0415e2a387c1b0e12a11bf76cf87aca7dd" Mar 16 17:58:54 crc kubenswrapper[4736]: I0316 17:58:54.989359 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="775260d4-2928-45a0-8196-e286f09f748a" path="/var/lib/kubelet/pods/775260d4-2928-45a0-8196-e286f09f748a/volumes" Mar 16 17:59:05 crc kubenswrapper[4736]: I0316 17:59:05.097021 4736 scope.go:117] "RemoveContainer" containerID="7e8967ca66ee1a99c6feed062296d14b522efde12792bc7184838cd29d95feed" Mar 16 17:59:05 crc kubenswrapper[4736]: I0316 17:59:05.979190 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:59:05 crc kubenswrapper[4736]: E0316 17:59:05.979499 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:59:20 crc kubenswrapper[4736]: I0316 17:59:20.977562 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:59:20 crc kubenswrapper[4736]: E0316 17:59:20.978368 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.262250 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264240 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="extract-utilities" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264393 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="extract-utilities" Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264481 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="extract-content" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264494 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="extract-content" Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264518 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264524 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264548 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="extract-content" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264555 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="extract-content" Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264579 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="registry-server" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264586 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="registry-server" Mar 16 17:59:23 crc kubenswrapper[4736]: E0316 17:59:23.264607 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="extract-utilities" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264613 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="extract-utilities" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.264993 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="b719f7c1-5e8e-4358-9bf5-c9c1662d2519" containerName="registry-server" Mar 16 
17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.265015 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="775260d4-2928-45a0-8196-e286f09f748a" containerName="registry-server" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.267954 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.277514 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.421277 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rz7\" (UniqueName: \"kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.421569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.421718 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.523477 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.523565 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8rz7\" (UniqueName: \"kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.523617 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.524043 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.524062 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.548268 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8rz7\" (UniqueName: \"kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7\") pod \"redhat-marketplace-j95fh\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:23 crc kubenswrapper[4736]: I0316 17:59:23.627860 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:24 crc kubenswrapper[4736]: I0316 17:59:24.162261 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:24 crc kubenswrapper[4736]: I0316 17:59:24.324625 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerStarted","Data":"ec28492e296ee03e29c0299c8fe38bf2b66c3deb311d427b87b35e87554db35f"} Mar 16 17:59:25 crc kubenswrapper[4736]: I0316 17:59:25.333754 4736 generic.go:334] "Generic (PLEG): container finished" podID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerID="eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78" exitCode=0 Mar 16 17:59:25 crc kubenswrapper[4736]: I0316 17:59:25.333812 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerDied","Data":"eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78"} Mar 16 17:59:27 crc kubenswrapper[4736]: I0316 17:59:27.352756 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerStarted","Data":"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c"} Mar 16 17:59:28 crc kubenswrapper[4736]: I0316 17:59:28.364568 4736 generic.go:334] "Generic (PLEG): container finished" podID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerID="e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c" exitCode=0 Mar 16 17:59:28 crc kubenswrapper[4736]: I0316 17:59:28.364613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerDied","Data":"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c"} Mar 16 17:59:29 crc kubenswrapper[4736]: I0316 17:59:29.380690 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerStarted","Data":"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a"} Mar 16 17:59:29 crc kubenswrapper[4736]: I0316 17:59:29.404135 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j95fh" podStartSLOduration=2.731415466 podStartE2EDuration="6.404108903s" podCreationTimestamp="2026-03-16 17:59:23 +0000 UTC" firstStartedPulling="2026-03-16 17:59:25.33546778 +0000 UTC m=+9967.062858067" lastFinishedPulling="2026-03-16 17:59:29.008161217 +0000 UTC 
m=+9970.735551504" observedRunningTime="2026-03-16 17:59:29.400618898 +0000 UTC m=+9971.128009205" watchObservedRunningTime="2026-03-16 17:59:29.404108903 +0000 UTC m=+9971.131499190" Mar 16 17:59:33 crc kubenswrapper[4736]: I0316 17:59:33.628835 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:33 crc kubenswrapper[4736]: I0316 17:59:33.630164 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:34 crc kubenswrapper[4736]: I0316 17:59:34.699609 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j95fh" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="registry-server" probeResult="failure" output=< Mar 16 17:59:34 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 17:59:34 crc kubenswrapper[4736]: > Mar 16 17:59:34 crc kubenswrapper[4736]: I0316 17:59:34.980163 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:59:34 crc kubenswrapper[4736]: E0316 17:59:34.980408 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:59:43 crc kubenswrapper[4736]: I0316 17:59:43.702310 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:43 crc kubenswrapper[4736]: I0316 17:59:43.761093 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:43 crc kubenswrapper[4736]: I0316 17:59:43.948990 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:45 crc kubenswrapper[4736]: I0316 17:59:45.551981 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j95fh" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="registry-server" containerID="cri-o://6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a" gracePeriod=2 Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.215486 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.380429 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities\") pod \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.380614 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content\") pod \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.380717 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8rz7\" (UniqueName: \"kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7\") pod \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\" (UID: \"6c5bf984-cf7e-40be-9505-ce23fb1aca36\") " Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.381063 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities" (OuterVolumeSpecName: "utilities") pod "6c5bf984-cf7e-40be-9505-ce23fb1aca36" (UID: "6c5bf984-cf7e-40be-9505-ce23fb1aca36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.393825 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7" (OuterVolumeSpecName: "kube-api-access-s8rz7") pod "6c5bf984-cf7e-40be-9505-ce23fb1aca36" (UID: "6c5bf984-cf7e-40be-9505-ce23fb1aca36"). InnerVolumeSpecName "kube-api-access-s8rz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.404571 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c5bf984-cf7e-40be-9505-ce23fb1aca36" (UID: "6c5bf984-cf7e-40be-9505-ce23fb1aca36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.482924 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.482969 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8rz7\" (UniqueName: \"kubernetes.io/projected/6c5bf984-cf7e-40be-9505-ce23fb1aca36-kube-api-access-s8rz7\") on node \"crc\" DevicePath \"\"" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.482982 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c5bf984-cf7e-40be-9505-ce23fb1aca36-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.562869 4736 generic.go:334] "Generic (PLEG): container finished" podID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerID="6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a" exitCode=0 Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.562936 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerDied","Data":"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a"} Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.562954 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j95fh" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.562967 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j95fh" event={"ID":"6c5bf984-cf7e-40be-9505-ce23fb1aca36","Type":"ContainerDied","Data":"ec28492e296ee03e29c0299c8fe38bf2b66c3deb311d427b87b35e87554db35f"} Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.562990 4736 scope.go:117] "RemoveContainer" containerID="6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.597586 4736 scope.go:117] "RemoveContainer" containerID="e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.617751 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.629033 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j95fh"] Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.679025 4736 scope.go:117] "RemoveContainer" containerID="eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.723756 4736 scope.go:117] "RemoveContainer" containerID="6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a" Mar 16 17:59:46 crc kubenswrapper[4736]: E0316 17:59:46.724771 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a\": container with ID starting with 6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a not found: ID does not exist" containerID="6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.724813 4736 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a"} err="failed to get container status \"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a\": rpc error: code = NotFound desc = could not find container \"6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a\": container with ID starting with 6fe6c6519f1c530a2f44be0cacdcad55c114a4cfe60ddd204c9b62e73f9d3a3a not found: ID does not exist" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.724840 4736 scope.go:117] "RemoveContainer" containerID="e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c" Mar 16 17:59:46 crc kubenswrapper[4736]: E0316 17:59:46.725178 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c\": container with ID starting with e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c not found: ID does not exist" containerID="e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.725226 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c"} err="failed to get container status \"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c\": rpc error: code = NotFound desc = could not find container \"e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c\": container with ID starting with e0a34c3fe6351729cd7aad8fcd58181aae43f216ed779e2a6d5f712c7cfa074c not found: ID does not exist" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.725256 4736 scope.go:117] "RemoveContainer" containerID="eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78" Mar 16 17:59:46 crc kubenswrapper[4736]: E0316 17:59:46.728195 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78\": container with ID starting with eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78 not found: ID does not exist" containerID="eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78" Mar 16 17:59:46 crc kubenswrapper[4736]: I0316 17:59:46.728227 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78"} err="failed to get container status \"eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78\": rpc error: code = NotFound desc = could not find container \"eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78\": container with ID starting with eb5169f37da082923e6de6d050fd46677a4d62afc9891684cee9ee8d9f46bc78 not found: ID does not exist" Mar 16 17:59:47 crc kubenswrapper[4736]: I0316 17:59:47.014423 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" path="/var/lib/kubelet/pods/6c5bf984-cf7e-40be-9505-ce23fb1aca36/volumes" Mar 16 17:59:47 crc kubenswrapper[4736]: I0316 17:59:47.978327 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:59:47 crc kubenswrapper[4736]: E0316 17:59:47.978691 4736 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 17:59:58 crc kubenswrapper[4736]: I0316 17:59:58.978043 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 17:59:58 crc kubenswrapper[4736]: E0316 17:59:58.979986 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.172004 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561400-d6qn6"] Mar 16 18:00:00 crc kubenswrapper[4736]: E0316 18:00:00.172803 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="registry-server" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.172819 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="registry-server" Mar 16 18:00:00 crc kubenswrapper[4736]: E0316 18:00:00.172843 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="extract-content" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.172849 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="extract-content" Mar 16 18:00:00 crc kubenswrapper[4736]: E0316 18:00:00.172876 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="extract-utilities" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.172882 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="extract-utilities" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.173066 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5bf984-cf7e-40be-9505-ce23fb1aca36" containerName="registry-server" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.173798 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.187052 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm"] Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.189171 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.197712 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.197705 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.197723 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.197716 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.199411 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.206571 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561400-d6qn6"] Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.223461 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm"] Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.258747 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.258796 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgdd\" (UniqueName: \"kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.258909 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxhg8\" (UniqueName: \"kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8\") pod \"auto-csr-approver-29561400-d6qn6\" (UID: \"a58d008a-0a87-40d2-933f-f756a3adb684\") " pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.258927 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.360671 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxhg8\" (UniqueName: \"kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8\") pod \"auto-csr-approver-29561400-d6qn6\" (UID: \"a58d008a-0a87-40d2-933f-f756a3adb684\") " 
pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.360873 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.360935 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.360962 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgdd\" (UniqueName: \"kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.362519 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.594742 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.595357 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgdd\" (UniqueName: \"kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd\") pod \"collect-profiles-29561400-xd6vm\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.596002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxhg8\" (UniqueName: \"kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8\") pod \"auto-csr-approver-29561400-d6qn6\" (UID: \"a58d008a-0a87-40d2-933f-f756a3adb684\") " pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.810213 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:00 crc kubenswrapper[4736]: I0316 18:00:00.823783 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:01 crc kubenswrapper[4736]: I0316 18:00:01.622712 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm"] Mar 16 18:00:01 crc kubenswrapper[4736]: I0316 18:00:01.790842 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561400-d6qn6"] Mar 16 18:00:02 crc kubenswrapper[4736]: I0316 18:00:02.723293 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" event={"ID":"e29211b8-900b-411e-8762-3dfaf7b4a740","Type":"ContainerStarted","Data":"fbcec20ba52e34a9e4ba1f354c206dc3f5f6640f730578972249ccce0cf533f6"} Mar 16 18:00:02 crc kubenswrapper[4736]: I0316 18:00:02.723729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" event={"ID":"e29211b8-900b-411e-8762-3dfaf7b4a740","Type":"ContainerStarted","Data":"e0e7507fcde15f093d6d55cd3e95d91141f4eb4ac04367e718c79bb58761ba06"} Mar 16 18:00:02 crc kubenswrapper[4736]: I0316 18:00:02.726266 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" event={"ID":"a58d008a-0a87-40d2-933f-f756a3adb684","Type":"ContainerStarted","Data":"b3cbb8857c59f08566be7b035b04d66221f09d710fabf2b74b0098797d3ef95f"} Mar 16 18:00:03 crc kubenswrapper[4736]: I0316 18:00:03.739853 4736 generic.go:334] "Generic (PLEG): container finished" podID="e29211b8-900b-411e-8762-3dfaf7b4a740" containerID="fbcec20ba52e34a9e4ba1f354c206dc3f5f6640f730578972249ccce0cf533f6" exitCode=0 Mar 16 18:00:03 crc kubenswrapper[4736]: I0316 18:00:03.739979 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" event={"ID":"e29211b8-900b-411e-8762-3dfaf7b4a740","Type":"ContainerDied","Data":"fbcec20ba52e34a9e4ba1f354c206dc3f5f6640f730578972249ccce0cf533f6"} Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.350572 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.472836 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume\") pod \"e29211b8-900b-411e-8762-3dfaf7b4a740\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.472931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume\") pod \"e29211b8-900b-411e-8762-3dfaf7b4a740\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.473005 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgdd\" (UniqueName: \"kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd\") pod \"e29211b8-900b-411e-8762-3dfaf7b4a740\" (UID: \"e29211b8-900b-411e-8762-3dfaf7b4a740\") " Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.474325 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume" (OuterVolumeSpecName: "config-volume") pod "e29211b8-900b-411e-8762-3dfaf7b4a740" (UID: "e29211b8-900b-411e-8762-3dfaf7b4a740"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.485666 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e29211b8-900b-411e-8762-3dfaf7b4a740" (UID: "e29211b8-900b-411e-8762-3dfaf7b4a740"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.486013 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd" (OuterVolumeSpecName: "kube-api-access-hbgdd") pod "e29211b8-900b-411e-8762-3dfaf7b4a740" (UID: "e29211b8-900b-411e-8762-3dfaf7b4a740"). InnerVolumeSpecName "kube-api-access-hbgdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.574593 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29211b8-900b-411e-8762-3dfaf7b4a740-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.574894 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29211b8-900b-411e-8762-3dfaf7b4a740-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.574905 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgdd\" (UniqueName: \"kubernetes.io/projected/e29211b8-900b-411e-8762-3dfaf7b4a740-kube-api-access-hbgdd\") on node \"crc\" DevicePath \"\"" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.764548 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" event={"ID":"e29211b8-900b-411e-8762-3dfaf7b4a740","Type":"ContainerDied","Data":"e0e7507fcde15f093d6d55cd3e95d91141f4eb4ac04367e718c79bb58761ba06"} Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.764589 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e7507fcde15f093d6d55cd3e95d91141f4eb4ac04367e718c79bb58761ba06" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.765146 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm" Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.852781 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4"] Mar 16 18:00:05 crc kubenswrapper[4736]: I0316 18:00:05.863507 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561355-9m7n4"] Mar 16 18:00:06 crc kubenswrapper[4736]: I0316 18:00:06.778203 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" event={"ID":"a58d008a-0a87-40d2-933f-f756a3adb684","Type":"ContainerStarted","Data":"34b53e525248d97471522787ecbe0ada6c0e7eda2163c159e4382883b4e5706c"} Mar 16 18:00:06 crc kubenswrapper[4736]: I0316 18:00:06.800641 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" podStartSLOduration=3.668123025 podStartE2EDuration="6.800615138s" podCreationTimestamp="2026-03-16 18:00:00 +0000 UTC" firstStartedPulling="2026-03-16 18:00:02.108452208 +0000 UTC m=+10003.835842515" lastFinishedPulling="2026-03-16 18:00:05.240944341 +0000 UTC m=+10006.968334628" observedRunningTime="2026-03-16 18:00:06.799562529 +0000 UTC m=+10008.526952846" watchObservedRunningTime="2026-03-16 18:00:06.800615138 +0000 UTC m=+10008.528005465" Mar 16 18:00:06 crc kubenswrapper[4736]: I0316 18:00:06.995863 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af5e6705-3151-42d7-a383-e6f0a25c3fb0" path="/var/lib/kubelet/pods/af5e6705-3151-42d7-a383-e6f0a25c3fb0/volumes" Mar 16 18:00:07 crc kubenswrapper[4736]: I0316 18:00:07.789201 4736 generic.go:334] "Generic (PLEG): container finished" podID="a58d008a-0a87-40d2-933f-f756a3adb684" containerID="34b53e525248d97471522787ecbe0ada6c0e7eda2163c159e4382883b4e5706c" exitCode=0 Mar 16 18:00:07 crc kubenswrapper[4736]: I0316 
18:00:07.789249 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" event={"ID":"a58d008a-0a87-40d2-933f-f756a3adb684","Type":"ContainerDied","Data":"34b53e525248d97471522787ecbe0ada6c0e7eda2163c159e4382883b4e5706c"} Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.289777 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.349335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxhg8\" (UniqueName: \"kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8\") pod \"a58d008a-0a87-40d2-933f-f756a3adb684\" (UID: \"a58d008a-0a87-40d2-933f-f756a3adb684\") " Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.356239 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8" (OuterVolumeSpecName: "kube-api-access-xxhg8") pod "a58d008a-0a87-40d2-933f-f756a3adb684" (UID: "a58d008a-0a87-40d2-933f-f756a3adb684"). InnerVolumeSpecName "kube-api-access-xxhg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.452855 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxhg8\" (UniqueName: \"kubernetes.io/projected/a58d008a-0a87-40d2-933f-f756a3adb684-kube-api-access-xxhg8\") on node \"crc\" DevicePath \"\"" Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.809392 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" event={"ID":"a58d008a-0a87-40d2-933f-f756a3adb684","Type":"ContainerDied","Data":"b3cbb8857c59f08566be7b035b04d66221f09d710fabf2b74b0098797d3ef95f"} Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.809790 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cbb8857c59f08566be7b035b04d66221f09d710fabf2b74b0098797d3ef95f" Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.809418 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561400-d6qn6" Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.871224 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561394-876bp"] Mar 16 18:00:09 crc kubenswrapper[4736]: I0316 18:00:09.881811 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561394-876bp"] Mar 16 18:00:10 crc kubenswrapper[4736]: I0316 18:00:10.978370 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:00:10 crc kubenswrapper[4736]: E0316 18:00:10.979512 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:00:10 crc kubenswrapper[4736]: I0316 18:00:10.987813 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39508fd6-9c60-46e3-b4b0-6df56aed9587" path="/var/lib/kubelet/pods/39508fd6-9c60-46e3-b4b0-6df56aed9587/volumes" Mar 16 18:00:25 crc kubenswrapper[4736]: I0316 18:00:25.978175 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:00:25 crc kubenswrapper[4736]: E0316 18:00:25.979007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:00:38 crc kubenswrapper[4736]: I0316 18:00:38.993461 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:00:38 crc kubenswrapper[4736]: E0316 18:00:38.994440 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:00:51 crc kubenswrapper[4736]: I0316 18:00:51.979267 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:00:51 crc kubenswrapper[4736]: E0316 18:00:51.983677 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.160704 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29561401-q7h6f"] Mar 16 18:01:00 crc 
kubenswrapper[4736]: E0316 18:01:00.162298 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58d008a-0a87-40d2-933f-f756a3adb684" containerName="oc" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.162318 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58d008a-0a87-40d2-933f-f756a3adb684" containerName="oc" Mar 16 18:01:00 crc kubenswrapper[4736]: E0316 18:01:00.162332 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29211b8-900b-411e-8762-3dfaf7b4a740" containerName="collect-profiles" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.162341 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29211b8-900b-411e-8762-3dfaf7b4a740" containerName="collect-profiles" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.162612 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29211b8-900b-411e-8762-3dfaf7b4a740" containerName="collect-profiles" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.162658 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58d008a-0a87-40d2-933f-f756a3adb684" containerName="oc" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.164849 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.180846 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561401-q7h6f"] Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.255838 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.256143 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgx5\" (UniqueName: \"kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.256436 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.256569 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.358806 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 
18:01:00.359062 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.359151 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lgx5\" (UniqueName: \"kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.359260 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.366460 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.367129 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.367192 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.384474 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lgx5\" (UniqueName: \"kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5\") pod \"keystone-cron-29561401-q7h6f\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:00 crc kubenswrapper[4736]: I0316 18:01:00.501986 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:01 crc kubenswrapper[4736]: I0316 18:01:01.025341 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29561401-q7h6f"] Mar 16 18:01:01 crc kubenswrapper[4736]: I0316 18:01:01.330716 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561401-q7h6f" event={"ID":"0b25b5f7-f358-40e8-91df-9398c8719033","Type":"ContainerStarted","Data":"bf6061bc610c49ab9231d5859863b0902f290fb6d4e09112a14e22504f975269"} Mar 16 18:01:02 crc kubenswrapper[4736]: I0316 18:01:02.339511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561401-q7h6f" event={"ID":"0b25b5f7-f358-40e8-91df-9398c8719033","Type":"ContainerStarted","Data":"c58b56138f74bb738a74ec98b80a6a4610fc5a57814580ff0eaa00b7bb9fd0c0"} Mar 16 18:01:05 crc kubenswrapper[4736]: I0316 18:01:05.371827 4736 generic.go:334] "Generic (PLEG): container finished" podID="0b25b5f7-f358-40e8-91df-9398c8719033" containerID="c58b56138f74bb738a74ec98b80a6a4610fc5a57814580ff0eaa00b7bb9fd0c0" exitCode=0 Mar 16 18:01:05 crc kubenswrapper[4736]: I0316 18:01:05.371922 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561401-q7h6f" event={"ID":"0b25b5f7-f358-40e8-91df-9398c8719033","Type":"ContainerDied","Data":"c58b56138f74bb738a74ec98b80a6a4610fc5a57814580ff0eaa00b7bb9fd0c0"} Mar 16 18:01:05 crc kubenswrapper[4736]: I0316 18:01:05.974939 4736 scope.go:117] "RemoveContainer" containerID="2e0012792158552b2c29bce74a4cd1dc65f5b729105fcbfa14f7d8abce0b459f" Mar 16 18:01:05 crc kubenswrapper[4736]: I0316 18:01:05.978692 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:01:05 crc kubenswrapper[4736]: E0316 18:01:05.978984 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.063065 4736 scope.go:117] "RemoveContainer" containerID="cb5bb0689213238df52be44ba4e173a78cdc54a11436474f7d8475bdd020db89" Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.831194 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.995697 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys\") pod \"0b25b5f7-f358-40e8-91df-9398c8719033\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.995798 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data\") pod \"0b25b5f7-f358-40e8-91df-9398c8719033\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.996071 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle\") pod \"0b25b5f7-f358-40e8-91df-9398c8719033\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " Mar 16 18:01:06 crc kubenswrapper[4736]: I0316 18:01:06.996275 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lgx5\" (UniqueName: \"kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5\") pod \"0b25b5f7-f358-40e8-91df-9398c8719033\" (UID: \"0b25b5f7-f358-40e8-91df-9398c8719033\") " Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.005952 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5" (OuterVolumeSpecName: "kube-api-access-6lgx5") pod "0b25b5f7-f358-40e8-91df-9398c8719033" (UID: "0b25b5f7-f358-40e8-91df-9398c8719033"). InnerVolumeSpecName "kube-api-access-6lgx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.020023 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b25b5f7-f358-40e8-91df-9398c8719033" (UID: "0b25b5f7-f358-40e8-91df-9398c8719033"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.032883 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b25b5f7-f358-40e8-91df-9398c8719033" (UID: "0b25b5f7-f358-40e8-91df-9398c8719033"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.049997 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data" (OuterVolumeSpecName: "config-data") pod "0b25b5f7-f358-40e8-91df-9398c8719033" (UID: "0b25b5f7-f358-40e8-91df-9398c8719033"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.098972 4736 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.099025 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.099046 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25b5f7-f358-40e8-91df-9398c8719033-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.099073 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lgx5\" (UniqueName: \"kubernetes.io/projected/0b25b5f7-f358-40e8-91df-9398c8719033-kube-api-access-6lgx5\") on node \"crc\" DevicePath \"\"" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.398767 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29561401-q7h6f" event={"ID":"0b25b5f7-f358-40e8-91df-9398c8719033","Type":"ContainerDied","Data":"bf6061bc610c49ab9231d5859863b0902f290fb6d4e09112a14e22504f975269"} Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.399088 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6061bc610c49ab9231d5859863b0902f290fb6d4e09112a14e22504f975269" Mar 16 18:01:07 crc kubenswrapper[4736]: I0316 18:01:07.399032 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29561401-q7h6f" Mar 16 18:01:19 crc kubenswrapper[4736]: I0316 18:01:19.003162 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:01:19 crc kubenswrapper[4736]: E0316 18:01:19.004704 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:01:33 crc kubenswrapper[4736]: I0316 18:01:33.978970 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:01:33 crc kubenswrapper[4736]: E0316 18:01:33.979755 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:01:47 crc kubenswrapper[4736]: I0316 18:01:47.978161 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:01:47 crc kubenswrapper[4736]: E0316 18:01:47.979020 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:01:59 crc kubenswrapper[4736]: I0316 18:01:59.977962 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:01:59 crc kubenswrapper[4736]: E0316 18:01:59.979072 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.170389 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561402-cgzrj"] Mar 16 18:02:00 crc kubenswrapper[4736]: E0316 18:02:00.170906 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b25b5f7-f358-40e8-91df-9398c8719033" containerName="keystone-cron" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.170928 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b25b5f7-f358-40e8-91df-9398c8719033" containerName="keystone-cron" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.171260 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b25b5f7-f358-40e8-91df-9398c8719033" containerName="keystone-cron" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.172638 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.178239 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.179159 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.179371 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.189987 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561402-cgzrj"] Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.355045 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv46m\" (UniqueName: \"kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m\") pod \"auto-csr-approver-29561402-cgzrj\" (UID: \"7fb6a897-4d25-4370-a54a-0ce0ea52a89a\") " pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.458226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv46m\" (UniqueName: \"kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m\") pod \"auto-csr-approver-29561402-cgzrj\" (UID: \"7fb6a897-4d25-4370-a54a-0ce0ea52a89a\") " pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.490194 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv46m\" (UniqueName: \"kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m\") pod \"auto-csr-approver-29561402-cgzrj\" (UID: \"7fb6a897-4d25-4370-a54a-0ce0ea52a89a\") " pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.505120 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:00 crc kubenswrapper[4736]: I0316 18:02:00.975762 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561402-cgzrj"] Mar 16 18:02:01 crc kubenswrapper[4736]: I0316 18:02:01.985030 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" event={"ID":"7fb6a897-4d25-4370-a54a-0ce0ea52a89a","Type":"ContainerStarted","Data":"e64c0edfaf5fc82a70fa2817c9c96db649b4967d0891906c21361731ae015981"} Mar 16 18:02:04 crc kubenswrapper[4736]: I0316 18:02:04.009880 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" event={"ID":"7fb6a897-4d25-4370-a54a-0ce0ea52a89a","Type":"ContainerStarted","Data":"797491e33f39159c15d7b6fa57645635becfae6ef6fcbb582e432de4d95e0e8a"} Mar 16 18:02:04 crc kubenswrapper[4736]: I0316 18:02:04.029561 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" podStartSLOduration=2.78206125 podStartE2EDuration="4.029538368s" podCreationTimestamp="2026-03-16 18:02:00 +0000 UTC" firstStartedPulling="2026-03-16 18:02:00.979371631 +0000 UTC m=+10122.706761918" lastFinishedPulling="2026-03-16 18:02:02.226848709 +0000 UTC m=+10123.954239036" observedRunningTime="2026-03-16 18:02:04.02448442 +0000 UTC m=+10125.751874777" watchObservedRunningTime="2026-03-16 18:02:04.029538368 +0000 UTC m=+10125.756928655" Mar 16 18:02:05 crc kubenswrapper[4736]: I0316 18:02:05.019076 4736 generic.go:334] "Generic (PLEG): container finished" podID="7fb6a897-4d25-4370-a54a-0ce0ea52a89a" containerID="797491e33f39159c15d7b6fa57645635becfae6ef6fcbb582e432de4d95e0e8a" exitCode=0 Mar 16 18:02:05 crc kubenswrapper[4736]: I0316 18:02:05.019363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" event={"ID":"7fb6a897-4d25-4370-a54a-0ce0ea52a89a","Type":"ContainerDied","Data":"797491e33f39159c15d7b6fa57645635becfae6ef6fcbb582e432de4d95e0e8a"} Mar 16 18:02:06 crc kubenswrapper[4736]: I0316 18:02:06.521952 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:06 crc kubenswrapper[4736]: I0316 18:02:06.686780 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv46m\" (UniqueName: \"kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m\") pod \"7fb6a897-4d25-4370-a54a-0ce0ea52a89a\" (UID: \"7fb6a897-4d25-4370-a54a-0ce0ea52a89a\") " Mar 16 18:02:06 crc kubenswrapper[4736]: I0316 18:02:06.693192 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m" (OuterVolumeSpecName: "kube-api-access-vv46m") pod "7fb6a897-4d25-4370-a54a-0ce0ea52a89a" (UID: "7fb6a897-4d25-4370-a54a-0ce0ea52a89a"). InnerVolumeSpecName "kube-api-access-vv46m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:02:06 crc kubenswrapper[4736]: I0316 18:02:06.789807 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv46m\" (UniqueName: \"kubernetes.io/projected/7fb6a897-4d25-4370-a54a-0ce0ea52a89a-kube-api-access-vv46m\") on node \"crc\" DevicePath \"\"" Mar 16 18:02:07 crc kubenswrapper[4736]: I0316 18:02:07.042479 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" event={"ID":"7fb6a897-4d25-4370-a54a-0ce0ea52a89a","Type":"ContainerDied","Data":"e64c0edfaf5fc82a70fa2817c9c96db649b4967d0891906c21361731ae015981"} Mar 16 18:02:07 crc kubenswrapper[4736]: I0316 18:02:07.042539 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e64c0edfaf5fc82a70fa2817c9c96db649b4967d0891906c21361731ae015981" Mar 16 18:02:07 crc kubenswrapper[4736]: I0316 18:02:07.042644 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561402-cgzrj" Mar 16 18:02:07 crc kubenswrapper[4736]: I0316 18:02:07.108542 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561396-mwwsm"] Mar 16 18:02:07 crc kubenswrapper[4736]: I0316 18:02:07.119339 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561396-mwwsm"] Mar 16 18:02:08 crc kubenswrapper[4736]: I0316 18:02:08.992357 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d092be4-2da3-435c-ac03-4a88a6a6294f" path="/var/lib/kubelet/pods/2d092be4-2da3-435c-ac03-4a88a6a6294f/volumes" Mar 16 18:02:12 crc kubenswrapper[4736]: I0316 18:02:12.979868 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:02:12 crc kubenswrapper[4736]: E0316 18:02:12.981065 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:02:26 crc kubenswrapper[4736]: I0316 18:02:26.979594 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:02:26 crc kubenswrapper[4736]: E0316 18:02:26.981419 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:02:40 crc kubenswrapper[4736]: I0316 18:02:40.978331 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:02:40 crc kubenswrapper[4736]: E0316 18:02:40.979177 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:02:53 crc kubenswrapper[4736]: I0316 18:02:53.978278 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:02:53 crc kubenswrapper[4736]: E0316 18:02:53.979540 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:03:05 crc kubenswrapper[4736]: I0316 18:03:05.978245 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:03:05 crc kubenswrapper[4736]: E0316 18:03:05.978978 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:03:06 crc kubenswrapper[4736]: I0316 18:03:06.178438 4736 scope.go:117] "RemoveContainer" containerID="b368cdea3518b8351e786bdbddaa618029ae41a04c9e7c9839bb19e2e045a313" Mar 16 18:03:17 crc kubenswrapper[4736]: I0316 18:03:17.978879 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:03:17 crc kubenswrapper[4736]: E0316 18:03:17.979679 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:03:28 crc kubenswrapper[4736]: I0316 18:03:28.990597 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:03:28 crc kubenswrapper[4736]: E0316 18:03:28.991579 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:03:40 crc kubenswrapper[4736]: I0316 18:03:40.978959 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:03:42 crc kubenswrapper[4736]: I0316 18:03:42.090764 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92"} Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.444997 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:03:45 crc kubenswrapper[4736]: E0316 18:03:45.447810 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb6a897-4d25-4370-a54a-0ce0ea52a89a" containerName="oc" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.447842 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb6a897-4d25-4370-a54a-0ce0ea52a89a" containerName="oc" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.448199 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb6a897-4d25-4370-a54a-0ce0ea52a89a" containerName="oc" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.453243 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.472464 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.552549 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.552726 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.552764 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzl4\" (UniqueName: \"kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.655097 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.655156 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzl4\" (UniqueName: \"kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.655262 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities\") pod \"redhat-operators-8fgqr\" (UID: 
\"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.655696 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.655691 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.684862 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzl4\" (UniqueName: \"kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4\") pod \"redhat-operators-8fgqr\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:45 crc kubenswrapper[4736]: I0316 18:03:45.778372 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:03:46 crc kubenswrapper[4736]: I0316 18:03:46.916386 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:03:47 crc kubenswrapper[4736]: I0316 18:03:47.140033 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerStarted","Data":"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0"} Mar 16 18:03:47 crc kubenswrapper[4736]: I0316 18:03:47.140297 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerStarted","Data":"88ac13a2cde0b2446ce6edcec4908034a7549103231f70075b3159407119ea47"} Mar 16 18:03:48 crc kubenswrapper[4736]: I0316 18:03:48.150481 4736 generic.go:334] "Generic (PLEG): container finished" podID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerID="cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0" exitCode=0 Mar 16 18:03:48 crc kubenswrapper[4736]: I0316 18:03:48.150556 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerDied","Data":"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0"} Mar 16 18:03:48 crc kubenswrapper[4736]: I0316 18:03:48.160027 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:03:50 crc kubenswrapper[4736]: I0316 18:03:50.184128 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerStarted","Data":"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782"} Mar 16 18:03:55 crc kubenswrapper[4736]: I0316 18:03:55.229492 4736 generic.go:334] "Generic (PLEG): container finished" podID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerID="d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782" 
exitCode=0 Mar 16 18:03:55 crc kubenswrapper[4736]: I0316 18:03:55.229709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerDied","Data":"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782"} Mar 16 18:03:57 crc kubenswrapper[4736]: I0316 18:03:57.249351 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerStarted","Data":"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34"} Mar 16 18:03:57 crc kubenswrapper[4736]: I0316 18:03:57.272539 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8fgqr" podStartSLOduration=4.297224538 podStartE2EDuration="12.272522954s" podCreationTimestamp="2026-03-16 18:03:45 +0000 UTC" firstStartedPulling="2026-03-16 18:03:48.154836829 +0000 UTC m=+10229.882227126" lastFinishedPulling="2026-03-16 18:03:56.130135245 +0000 UTC m=+10237.857525542" observedRunningTime="2026-03-16 18:03:57.268430223 +0000 UTC m=+10238.995820510" watchObservedRunningTime="2026-03-16 18:03:57.272522954 +0000 UTC m=+10238.999913241" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.146513 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561404-7fc7k"] Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.148294 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.150741 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.150950 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.151551 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.157154 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561404-7fc7k"] Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.252630 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgdpr\" (UniqueName: \"kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr\") pod \"auto-csr-approver-29561404-7fc7k\" (UID: \"f479397a-a884-422d-a0a1-11584e019834\") " pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.354214 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgdpr\" (UniqueName: \"kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr\") pod \"auto-csr-approver-29561404-7fc7k\" (UID: \"f479397a-a884-422d-a0a1-11584e019834\") " pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.383763 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgdpr\" (UniqueName: \"kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr\") pod \"auto-csr-approver-29561404-7fc7k\" (UID: 
\"f479397a-a884-422d-a0a1-11584e019834\") " pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:00 crc kubenswrapper[4736]: I0316 18:04:00.464614 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:01 crc kubenswrapper[4736]: I0316 18:04:01.408352 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561404-7fc7k"] Mar 16 18:04:01 crc kubenswrapper[4736]: W0316 18:04:01.419909 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf479397a_a884_422d_a0a1_11584e019834.slice/crio-b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece WatchSource:0}: Error finding container b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece: Status 404 returned error can't find the container with id b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece Mar 16 18:04:02 crc kubenswrapper[4736]: I0316 18:04:02.288127 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" event={"ID":"f479397a-a884-422d-a0a1-11584e019834","Type":"ContainerStarted","Data":"b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece"} Mar 16 18:04:03 crc kubenswrapper[4736]: I0316 18:04:03.298302 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" event={"ID":"f479397a-a884-422d-a0a1-11584e019834","Type":"ContainerStarted","Data":"8060089675f0eb49cbb3f267656e6287b9d7e66e3b05d85dcf188ffe63703ced"} Mar 16 18:04:03 crc kubenswrapper[4736]: I0316 18:04:03.313825 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" podStartSLOduration=2.342460415 podStartE2EDuration="3.313795966s" podCreationTimestamp="2026-03-16 18:04:00 +0000 UTC" firstStartedPulling="2026-03-16 18:04:01.422322192 +0000 UTC m=+10243.149712479" lastFinishedPulling="2026-03-16 18:04:02.393657743 +0000 UTC m=+10244.121048030" observedRunningTime="2026-03-16 18:04:03.310203707 +0000 UTC m=+10245.037594004" watchObservedRunningTime="2026-03-16 18:04:03.313795966 +0000 UTC m=+10245.041186263" Mar 16 18:04:04 crc kubenswrapper[4736]: I0316 18:04:04.307807 4736 generic.go:334] "Generic (PLEG): container finished" podID="f479397a-a884-422d-a0a1-11584e019834" containerID="8060089675f0eb49cbb3f267656e6287b9d7e66e3b05d85dcf188ffe63703ced" exitCode=0 Mar 16 18:04:04 crc kubenswrapper[4736]: I0316 18:04:04.307987 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" event={"ID":"f479397a-a884-422d-a0a1-11584e019834","Type":"ContainerDied","Data":"8060089675f0eb49cbb3f267656e6287b9d7e66e3b05d85dcf188ffe63703ced"} Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.696582 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.765278 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgdpr\" (UniqueName: \"kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr\") pod \"f479397a-a884-422d-a0a1-11584e019834\" (UID: \"f479397a-a884-422d-a0a1-11584e019834\") " Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.773736 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr" (OuterVolumeSpecName: "kube-api-access-qgdpr") pod "f479397a-a884-422d-a0a1-11584e019834" (UID: "f479397a-a884-422d-a0a1-11584e019834"). InnerVolumeSpecName "kube-api-access-qgdpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.779511 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.780868 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:04:05 crc kubenswrapper[4736]: I0316 18:04:05.867250 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgdpr\" (UniqueName: \"kubernetes.io/projected/f479397a-a884-422d-a0a1-11584e019834-kube-api-access-qgdpr\") on node \"crc\" DevicePath \"\"" Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.355480 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" event={"ID":"f479397a-a884-422d-a0a1-11584e019834","Type":"ContainerDied","Data":"b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece"} Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.356062 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8674e94acd20df99e188d1c11a5bbce61f323ca6ddcf5a5849fcd78469d8ece" Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.356242 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561404-7fc7k" Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.462158 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561398-dqbdk"] Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.482115 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561398-dqbdk"] Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.846744 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:06 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:06 crc kubenswrapper[4736]: > Mar 16 18:04:06 crc kubenswrapper[4736]: I0316 18:04:06.997262 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ca4a3b-a1c6-43fb-8740-fa9eaefd7330" path="/var/lib/kubelet/pods/34ca4a3b-a1c6-43fb-8740-fa9eaefd7330/volumes" Mar 16 18:04:16 crc kubenswrapper[4736]: I0316 18:04:16.823044 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:16 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:16 crc kubenswrapper[4736]: > Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.447698 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75f7dd5797-28cnx"] Mar 16 18:04:24 crc kubenswrapper[4736]: E0316 18:04:24.450675 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f479397a-a884-422d-a0a1-11584e019834" containerName="oc" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.450702 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="f479397a-a884-422d-a0a1-11584e019834" containerName="oc" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.451561 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="f479397a-a884-422d-a0a1-11584e019834" containerName="oc" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.454832 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.550469 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f7dd5797-28cnx"] Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570250 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-public-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570463 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-ovndb-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570524 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-internal-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570585 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsjl\" (UniqueName: \"kubernetes.io/projected/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-kube-api-access-rcsjl\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570631 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-combined-ca-bundle\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.570961 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-httpd-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.571206 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.673542 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-ovndb-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.673950 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-internal-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.673986 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsjl\" (UniqueName: \"kubernetes.io/projected/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-kube-api-access-rcsjl\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.674011 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-combined-ca-bundle\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.674117 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-httpd-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.674189 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.674245 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-public-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.693405 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-httpd-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.693404 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-internal-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.694065 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-public-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.694353 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-config\") pod \"neutron-75f7dd5797-28cnx\" (UID: 
\"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.705667 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-combined-ca-bundle\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.713541 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-ovndb-tls-certs\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.714980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsjl\" (UniqueName: \"kubernetes.io/projected/a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c-kube-api-access-rcsjl\") pod \"neutron-75f7dd5797-28cnx\" (UID: \"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c\") " pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:24 crc kubenswrapper[4736]: I0316 18:04:24.775367 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.132000 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75f7dd5797-28cnx"] Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.705238 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f7dd5797-28cnx" event={"ID":"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c","Type":"ContainerStarted","Data":"16e7e8b1d3eb3e31af8577553891f727efc7b8188219c95b76e293bf822d06bd"} Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.705495 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f7dd5797-28cnx" event={"ID":"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c","Type":"ContainerStarted","Data":"4561342d8afd8293c073b3ca9c1c72aa2a272a637a45c24ea2005adb5243442a"} Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.705651 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.705665 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75f7dd5797-28cnx" event={"ID":"a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c","Type":"ContainerStarted","Data":"31bac2fe8a5fb210f6e4a70f7149f7c327d5f8468708191b0d4421302265bb63"} Mar 16 18:04:26 crc kubenswrapper[4736]: I0316 18:04:26.829166 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:26 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:26 crc kubenswrapper[4736]: > Mar 16 18:04:36 crc kubenswrapper[4736]: I0316 18:04:36.832505 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:36 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:36 crc kubenswrapper[4736]: > Mar 16 18:04:46 crc kubenswrapper[4736]: I0316 
18:04:46.886246 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:46 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:46 crc kubenswrapper[4736]: > Mar 16 18:04:54 crc kubenswrapper[4736]: I0316 18:04:54.820312 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75f7dd5797-28cnx" Mar 16 18:04:54 crc kubenswrapper[4736]: I0316 18:04:54.930849 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75f7dd5797-28cnx" podStartSLOduration=30.923481524 podStartE2EDuration="30.923481524s" podCreationTimestamp="2026-03-16 18:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:04:26.746948856 +0000 UTC m=+10268.474339143" watchObservedRunningTime="2026-03-16 18:04:54.923481524 +0000 UTC m=+10296.650871811" Mar 16 18:04:55 crc kubenswrapper[4736]: I0316 18:04:55.519252 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 18:04:55 crc kubenswrapper[4736]: I0316 18:04:55.536833 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d578f4777-v7g9k" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-httpd" containerID="cri-o://7647f5f97855fd348234de5f521ad8b3bd622297e950bb3b347854dbfca4d881" gracePeriod=30 Mar 16 18:04:55 crc kubenswrapper[4736]: I0316 18:04:55.536773 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d578f4777-v7g9k" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-api" containerID="cri-o://bb7591e2394d320f8150cf1fee9dd42381493664411f7e46e51145425497b4d8" gracePeriod=30 Mar 16 18:04:56 crc kubenswrapper[4736]: I0316 18:04:56.021858 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerDied","Data":"7647f5f97855fd348234de5f521ad8b3bd622297e950bb3b347854dbfca4d881"} Mar 16 18:04:56 crc kubenswrapper[4736]: I0316 18:04:56.022271 4736 generic.go:334] "Generic (PLEG): container finished" podID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerID="7647f5f97855fd348234de5f521ad8b3bd622297e950bb3b347854dbfca4d881" exitCode=0 Mar 16 18:04:56 crc kubenswrapper[4736]: I0316 18:04:56.847079 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:04:56 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:04:56 crc kubenswrapper[4736]: > Mar 16 18:05:01 crc kubenswrapper[4736]: I0316 18:05:01.088053 4736 generic.go:334] "Generic (PLEG): container finished" podID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerID="bb7591e2394d320f8150cf1fee9dd42381493664411f7e46e51145425497b4d8" exitCode=0 Mar 16 18:05:01 crc kubenswrapper[4736]: I0316 18:05:01.088152 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerDied","Data":"bb7591e2394d320f8150cf1fee9dd42381493664411f7e46e51145425497b4d8"} Mar 16 18:05:02 crc 
kubenswrapper[4736]: I0316 18:05:02.102768 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d578f4777-v7g9k" event={"ID":"58e24fbd-004d-44bd-a19a-3ab77b210e84","Type":"ContainerDied","Data":"c4a8d22b4870fff10b520c979c6be084edbb2256fa2132e4148fe4cfd3de0db6"} Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.104463 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a8d22b4870fff10b520c979c6be084edbb2256fa2132e4148fe4cfd3de0db6" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.145461 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.286812 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.286932 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.287039 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.287084 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.287165 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.287190 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.287219 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lclw8\" (UniqueName: \"kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8\") pod \"58e24fbd-004d-44bd-a19a-3ab77b210e84\" (UID: \"58e24fbd-004d-44bd-a19a-3ab77b210e84\") " Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.310704 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.330655 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8" (OuterVolumeSpecName: "kube-api-access-lclw8") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "kube-api-access-lclw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.375203 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.376210 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.390447 4736 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.390479 4736 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.390491 4736 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.390503 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lclw8\" (UniqueName: \"kubernetes.io/projected/58e24fbd-004d-44bd-a19a-3ab77b210e84-kube-api-access-lclw8\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.391890 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.399955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config" (OuterVolumeSpecName: "config") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.419955 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "58e24fbd-004d-44bd-a19a-3ab77b210e84" (UID: "58e24fbd-004d-44bd-a19a-3ab77b210e84"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.492750 4736 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-config\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.492785 4736 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:02 crc kubenswrapper[4736]: I0316 18:05:02.492798 4736 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e24fbd-004d-44bd-a19a-3ab77b210e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:03 crc kubenswrapper[4736]: I0316 18:05:03.114304 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d578f4777-v7g9k" Mar 16 18:05:03 crc kubenswrapper[4736]: I0316 18:05:03.145716 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 18:05:03 crc kubenswrapper[4736]: I0316 18:05:03.158537 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d578f4777-v7g9k"] Mar 16 18:05:04 crc kubenswrapper[4736]: I0316 18:05:04.991435 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" path="/var/lib/kubelet/pods/58e24fbd-004d-44bd-a19a-3ab77b210e84/volumes" Mar 16 18:05:05 crc kubenswrapper[4736]: I0316 18:05:05.898006 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:05:05 crc kubenswrapper[4736]: I0316 18:05:05.976000 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:05:06 crc kubenswrapper[4736]: I0316 18:05:06.169375 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:05:06 crc kubenswrapper[4736]: I0316 18:05:06.879166 4736 scope.go:117] "RemoveContainer" containerID="7647f5f97855fd348234de5f521ad8b3bd622297e950bb3b347854dbfca4d881" Mar 16 18:05:06 crc kubenswrapper[4736]: I0316 18:05:06.944316 4736 scope.go:117] "RemoveContainer" containerID="bb7591e2394d320f8150cf1fee9dd42381493664411f7e46e51145425497b4d8" Mar 16 18:05:06 crc kubenswrapper[4736]: I0316 18:05:06.997312 4736 scope.go:117] "RemoveContainer" containerID="920a999d52ca2809359f1cdab5c65ec04e093abd9c2865180166158bc8feae17" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.180039 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8fgqr" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" containerID="cri-o://c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34" gracePeriod=2 Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.665296 
4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.812822 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities\") pod \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.812957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content\") pod \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.813094 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxzl4\" (UniqueName: \"kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4\") pod \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\" (UID: \"96baaa8e-91d5-4998-9ced-b4d88a155dc0\") " Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.819525 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities" (OuterVolumeSpecName: "utilities") pod "96baaa8e-91d5-4998-9ced-b4d88a155dc0" (UID: "96baaa8e-91d5-4998-9ced-b4d88a155dc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.829884 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4" (OuterVolumeSpecName: "kube-api-access-gxzl4") pod "96baaa8e-91d5-4998-9ced-b4d88a155dc0" (UID: "96baaa8e-91d5-4998-9ced-b4d88a155dc0"). InnerVolumeSpecName "kube-api-access-gxzl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.921447 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.921762 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxzl4\" (UniqueName: \"kubernetes.io/projected/96baaa8e-91d5-4998-9ced-b4d88a155dc0-kube-api-access-gxzl4\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:07 crc kubenswrapper[4736]: I0316 18:05:07.975517 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96baaa8e-91d5-4998-9ced-b4d88a155dc0" (UID: "96baaa8e-91d5-4998-9ced-b4d88a155dc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.023576 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96baaa8e-91d5-4998-9ced-b4d88a155dc0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.193482 4736 generic.go:334] "Generic (PLEG): container finished" podID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerID="c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34" exitCode=0 Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.193559 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerDied","Data":"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34"} Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.193611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8fgqr" event={"ID":"96baaa8e-91d5-4998-9ced-b4d88a155dc0","Type":"ContainerDied","Data":"88ac13a2cde0b2446ce6edcec4908034a7549103231f70075b3159407119ea47"} Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.193645 4736 scope.go:117] "RemoveContainer" containerID="c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.193970 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8fgqr" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.222636 4736 scope.go:117] "RemoveContainer" containerID="d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.243724 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.258030 4736 scope.go:117] "RemoveContainer" containerID="cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.262334 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8fgqr"] Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.310358 4736 scope.go:117] "RemoveContainer" containerID="c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34" Mar 16 18:05:08 crc kubenswrapper[4736]: E0316 18:05:08.311791 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34\": container with ID starting with c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34 not found: ID does not exist" containerID="c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.312047 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34"} err="failed to get container status \"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34\": rpc error: code = NotFound desc = could not find container \"c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34\": container with ID starting with c4892807a1df4edfdf635de479edbb28204c716ba1aafc49c483c0a4ae587a34 not found: ID does not exist" Mar 16 18:05:08 crc 
kubenswrapper[4736]: I0316 18:05:08.312069 4736 scope.go:117] "RemoveContainer" containerID="d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782" Mar 16 18:05:08 crc kubenswrapper[4736]: E0316 18:05:08.312516 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782\": container with ID starting with d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782 not found: ID does not exist" containerID="d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.312535 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782"} err="failed to get container status \"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782\": rpc error: code = NotFound desc = could not find container \"d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782\": container with ID starting with d1d570eed3432e919789b2f2a527f422cb3557a8b005354a70b212da205a3782 not found: ID does not exist" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.312548 4736 scope.go:117] "RemoveContainer" containerID="cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0" Mar 16 18:05:08 crc kubenswrapper[4736]: E0316 18:05:08.313081 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0\": container with ID starting with cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0 not found: ID does not exist" containerID="cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.313127 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0"} err="failed to get container status \"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0\": rpc error: code = NotFound desc = could not find container \"cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0\": container with ID starting with cb279a64b7b06df80f796b681dc0ca95278b8d3ee2cb9d98883566818a79e2d0 not found: ID does not exist" Mar 16 18:05:08 crc kubenswrapper[4736]: I0316 18:05:08.992241 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" path="/var/lib/kubelet/pods/96baaa8e-91d5-4998-9ced-b4d88a155dc0/volumes" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.257942 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561406-wxb9x"] Mar 16 18:06:00 crc kubenswrapper[4736]: E0316 18:06:00.260459 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="extract-utilities" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.260484 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="extract-utilities" Mar 16 18:06:00 crc kubenswrapper[4736]: E0316 18:06:00.260499 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="extract-content" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.260507 4736 
state_mem.go:107] "Deleted CPUSet assignment" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="extract-content" Mar 16 18:06:00 crc kubenswrapper[4736]: E0316 18:06:00.260514 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-api" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.260522 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-api" Mar 16 18:06:00 crc kubenswrapper[4736]: E0316 18:06:00.260537 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.260543 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" Mar 16 18:06:00 crc kubenswrapper[4736]: E0316 18:06:00.260553 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-httpd" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.260559 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-httpd" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.261830 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-httpd" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.261856 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="96baaa8e-91d5-4998-9ced-b4d88a155dc0" containerName="registry-server" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.261871 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e24fbd-004d-44bd-a19a-3ab77b210e84" containerName="neutron-api" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.268015 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.276272 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.276283 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.276280 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.345338 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561406-wxb9x"] Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.388687 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbqr\" (UniqueName: \"kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr\") pod \"auto-csr-approver-29561406-wxb9x\" (UID: \"8253ef30-ce5e-42ae-8c8a-d687c009eb6d\") " pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.504382 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbqr\" (UniqueName: \"kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr\") pod \"auto-csr-approver-29561406-wxb9x\" (UID: \"8253ef30-ce5e-42ae-8c8a-d687c009eb6d\") " pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.540401 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbqr\" (UniqueName: \"kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr\") pod \"auto-csr-approver-29561406-wxb9x\" (UID: \"8253ef30-ce5e-42ae-8c8a-d687c009eb6d\") " pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:00 crc kubenswrapper[4736]: I0316 18:06:00.587705 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:01 crc kubenswrapper[4736]: I0316 18:06:01.219224 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561406-wxb9x"] Mar 16 18:06:01 crc kubenswrapper[4736]: I0316 18:06:01.760769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" event={"ID":"8253ef30-ce5e-42ae-8c8a-d687c009eb6d","Type":"ContainerStarted","Data":"4151745783b35fc4fb5c7360d251dc83e91b3ad3d19d454a189a6d32b7adf535"} Mar 16 18:06:03 crc kubenswrapper[4736]: I0316 18:06:03.780093 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" event={"ID":"8253ef30-ce5e-42ae-8c8a-d687c009eb6d","Type":"ContainerStarted","Data":"2708ab8130dbca4541463dfb7fef40af4955bbb4f5f3bf68ff2ec3a398dd407a"} Mar 16 18:06:03 crc kubenswrapper[4736]: I0316 18:06:03.810841 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" podStartSLOduration=2.556788747 podStartE2EDuration="3.810475104s" podCreationTimestamp="2026-03-16 18:06:00 +0000 UTC" firstStartedPulling="2026-03-16 18:06:01.416342661 +0000 UTC m=+10363.143732948" lastFinishedPulling="2026-03-16 18:06:02.670029008 +0000 UTC m=+10364.397419305" observedRunningTime="2026-03-16 18:06:03.810257458 +0000 UTC m=+10365.537647755" watchObservedRunningTime="2026-03-16 18:06:03.810475104 +0000 UTC m=+10365.537865391" Mar 16 18:06:04 crc kubenswrapper[4736]: I0316 18:06:04.788380 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" event={"ID":"8253ef30-ce5e-42ae-8c8a-d687c009eb6d","Type":"ContainerDied","Data":"2708ab8130dbca4541463dfb7fef40af4955bbb4f5f3bf68ff2ec3a398dd407a"} Mar 16 18:06:04 crc kubenswrapper[4736]: I0316 18:06:04.788061 4736 generic.go:334] "Generic (PLEG): container finished" podID="8253ef30-ce5e-42ae-8c8a-d687c009eb6d" containerID="2708ab8130dbca4541463dfb7fef40af4955bbb4f5f3bf68ff2ec3a398dd407a" exitCode=0 Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.275335 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.430700 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzbqr\" (UniqueName: \"kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr\") pod \"8253ef30-ce5e-42ae-8c8a-d687c009eb6d\" (UID: \"8253ef30-ce5e-42ae-8c8a-d687c009eb6d\") " Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.436872 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr" (OuterVolumeSpecName: "kube-api-access-zzbqr") pod "8253ef30-ce5e-42ae-8c8a-d687c009eb6d" (UID: "8253ef30-ce5e-42ae-8c8a-d687c009eb6d"). InnerVolumeSpecName "kube-api-access-zzbqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.533413 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzbqr\" (UniqueName: \"kubernetes.io/projected/8253ef30-ce5e-42ae-8c8a-d687c009eb6d-kube-api-access-zzbqr\") on node \"crc\" DevicePath \"\"" Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.806592 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" event={"ID":"8253ef30-ce5e-42ae-8c8a-d687c009eb6d","Type":"ContainerDied","Data":"4151745783b35fc4fb5c7360d251dc83e91b3ad3d19d454a189a6d32b7adf535"} Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.806634 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4151745783b35fc4fb5c7360d251dc83e91b3ad3d19d454a189a6d32b7adf535" Mar 16 18:06:06 crc kubenswrapper[4736]: I0316 18:06:06.806662 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561406-wxb9x" Mar 16 18:06:07 crc kubenswrapper[4736]: I0316 18:06:07.378011 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561400-d6qn6"] Mar 16 18:06:07 crc kubenswrapper[4736]: I0316 18:06:07.386042 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561400-d6qn6"] Mar 16 18:06:08 crc kubenswrapper[4736]: I0316 18:06:08.507850 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:06:08 crc kubenswrapper[4736]: I0316 18:06:08.508186 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:06:09 crc kubenswrapper[4736]: I0316 18:06:09.006748 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58d008a-0a87-40d2-933f-f756a3adb684" path="/var/lib/kubelet/pods/a58d008a-0a87-40d2-933f-f756a3adb684/volumes" Mar 16 18:06:38 crc kubenswrapper[4736]: I0316 18:06:38.508514 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:06:38 crc kubenswrapper[4736]: I0316 18:06:38.509655 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:07:07 crc kubenswrapper[4736]: I0316 18:07:07.229554 4736 scope.go:117] "RemoveContainer" containerID="34b53e525248d97471522787ecbe0ada6c0e7eda2163c159e4382883b4e5706c" Mar 16 18:07:08 crc kubenswrapper[4736]: I0316 18:07:08.507705 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:07:08 crc kubenswrapper[4736]: I0316 18:07:08.508233 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:07:08 crc kubenswrapper[4736]: I0316 18:07:08.508320 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:07:08 crc kubenswrapper[4736]: I0316 18:07:08.510672 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:07:08 crc kubenswrapper[4736]: I0316 18:07:08.512444 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92" gracePeriod=600 Mar 16 18:07:09 crc kubenswrapper[4736]: I0316 18:07:09.479952 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92" exitCode=0 Mar 16 18:07:09 crc kubenswrapper[4736]: I0316 18:07:09.480023 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92"} Mar 16 18:07:09 crc kubenswrapper[4736]: I0316 18:07:09.480616 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973"} Mar 16 18:07:09 crc kubenswrapper[4736]: I0316 18:07:09.480663 4736 scope.go:117] "RemoveContainer" containerID="412be1988496fe449be4ae7695fca71b2ca47ff6b63a8b62d1bbddc29e407898" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.146363 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561408-x2c9f"] Mar 16 18:08:00 crc kubenswrapper[4736]: E0316 18:08:00.148699 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8253ef30-ce5e-42ae-8c8a-d687c009eb6d" containerName="oc" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.148725 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8253ef30-ce5e-42ae-8c8a-d687c009eb6d" containerName="oc" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.149089 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8253ef30-ce5e-42ae-8c8a-d687c009eb6d" containerName="oc" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.151649 4736 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.157689 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.157829 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.158299 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.178604 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561408-x2c9f"] Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.269172 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sggw\" (UniqueName: \"kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw\") pod \"auto-csr-approver-29561408-x2c9f\" (UID: \"02858064-4490-4511-9348-c1444d5e8539\") " pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.370898 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sggw\" (UniqueName: \"kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw\") pod \"auto-csr-approver-29561408-x2c9f\" (UID: \"02858064-4490-4511-9348-c1444d5e8539\") " pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.395060 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sggw\" (UniqueName: \"kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw\") pod \"auto-csr-approver-29561408-x2c9f\" (UID: \"02858064-4490-4511-9348-c1444d5e8539\") " pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:00 crc kubenswrapper[4736]: I0316 18:08:00.482214 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:01 crc kubenswrapper[4736]: I0316 18:08:01.458593 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561408-x2c9f"] Mar 16 18:08:02 crc kubenswrapper[4736]: I0316 18:08:02.098348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" event={"ID":"02858064-4490-4511-9348-c1444d5e8539","Type":"ContainerStarted","Data":"7f9f35ad85fe637d2d003603bfa2456fa5ac5653909c3dfb9623795645a6d56c"} Mar 16 18:08:03 crc kubenswrapper[4736]: I0316 18:08:03.110196 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" event={"ID":"02858064-4490-4511-9348-c1444d5e8539","Type":"ContainerStarted","Data":"79293c63ce84dbd9b4bd7d7a30aea4a3ab70fdfe2ded26cda9aa9c390619ceaa"} Mar 16 18:08:03 crc kubenswrapper[4736]: I0316 18:08:03.133551 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" podStartSLOduration=1.949398312 podStartE2EDuration="3.13353351s" podCreationTimestamp="2026-03-16 18:08:00 +0000 UTC" firstStartedPulling="2026-03-16 18:08:01.471307744 +0000 UTC m=+10483.198698031" lastFinishedPulling="2026-03-16 18:08:02.655442902 +0000 UTC m=+10484.382833229" observedRunningTime="2026-03-16 18:08:03.126690333 +0000 UTC m=+10484.854080660" watchObservedRunningTime="2026-03-16 18:08:03.13353351 +0000 UTC m=+10484.860923797" Mar 16 18:08:05 crc kubenswrapper[4736]: I0316 18:08:05.135003 4736 generic.go:334] "Generic (PLEG): container finished" podID="02858064-4490-4511-9348-c1444d5e8539" containerID="79293c63ce84dbd9b4bd7d7a30aea4a3ab70fdfe2ded26cda9aa9c390619ceaa" exitCode=0 Mar 16 18:08:05 crc kubenswrapper[4736]: I0316 18:08:05.135060 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" event={"ID":"02858064-4490-4511-9348-c1444d5e8539","Type":"ContainerDied","Data":"79293c63ce84dbd9b4bd7d7a30aea4a3ab70fdfe2ded26cda9aa9c390619ceaa"} Mar 16 18:08:06 crc kubenswrapper[4736]: I0316 18:08:06.662580 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:06 crc kubenswrapper[4736]: I0316 18:08:06.707917 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sggw\" (UniqueName: \"kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw\") pod \"02858064-4490-4511-9348-c1444d5e8539\" (UID: \"02858064-4490-4511-9348-c1444d5e8539\") " Mar 16 18:08:06 crc kubenswrapper[4736]: I0316 18:08:06.723204 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw" (OuterVolumeSpecName: "kube-api-access-7sggw") pod "02858064-4490-4511-9348-c1444d5e8539" (UID: "02858064-4490-4511-9348-c1444d5e8539"). InnerVolumeSpecName "kube-api-access-7sggw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:08:06 crc kubenswrapper[4736]: I0316 18:08:06.810234 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sggw\" (UniqueName: \"kubernetes.io/projected/02858064-4490-4511-9348-c1444d5e8539-kube-api-access-7sggw\") on node \"crc\" DevicePath \"\"" Mar 16 18:08:07 crc kubenswrapper[4736]: I0316 18:08:07.157720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" event={"ID":"02858064-4490-4511-9348-c1444d5e8539","Type":"ContainerDied","Data":"7f9f35ad85fe637d2d003603bfa2456fa5ac5653909c3dfb9623795645a6d56c"} Mar 16 18:08:07 crc kubenswrapper[4736]: I0316 18:08:07.157775 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f9f35ad85fe637d2d003603bfa2456fa5ac5653909c3dfb9623795645a6d56c" Mar 16 18:08:07 crc kubenswrapper[4736]: I0316 18:08:07.157791 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561408-x2c9f" Mar 16 18:08:07 crc kubenswrapper[4736]: I0316 18:08:07.226968 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561402-cgzrj"] Mar 16 18:08:07 crc kubenswrapper[4736]: I0316 18:08:07.239217 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561402-cgzrj"] Mar 16 18:08:09 crc kubenswrapper[4736]: I0316 18:08:09.006303 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb6a897-4d25-4370-a54a-0ce0ea52a89a" path="/var/lib/kubelet/pods/7fb6a897-4d25-4370-a54a-0ce0ea52a89a/volumes" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.498407 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:08:43 crc kubenswrapper[4736]: E0316 18:08:43.499349 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02858064-4490-4511-9348-c1444d5e8539" containerName="oc" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.499364 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="02858064-4490-4511-9348-c1444d5e8539" containerName="oc" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.499586 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="02858064-4490-4511-9348-c1444d5e8539" containerName="oc" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.503894 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.515001 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.613487 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.613557 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.613839 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlgx\" (UniqueName: \"kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.715903 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.716006 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.716171 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvlgx\" (UniqueName: \"kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.717017 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.717032 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.735625 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xvlgx\" (UniqueName: \"kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx\") pod \"certified-operators-qtz8q\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:43 crc kubenswrapper[4736]: I0316 18:08:43.829258 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:44 crc kubenswrapper[4736]: I0316 18:08:44.403899 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:08:44 crc kubenswrapper[4736]: I0316 18:08:44.566342 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerStarted","Data":"e655022f0d0a034b7381c4d99cd2efec59b742462f3e355dc80618396ac8ace9"} Mar 16 18:08:45 crc kubenswrapper[4736]: I0316 18:08:45.595100 4736 generic.go:334] "Generic (PLEG): container finished" podID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerID="bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f" exitCode=0 Mar 16 18:08:45 crc kubenswrapper[4736]: I0316 18:08:45.595801 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerDied","Data":"bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f"} Mar 16 18:08:46 crc kubenswrapper[4736]: I0316 18:08:46.609580 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerStarted","Data":"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b"} Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.629519 4736 generic.go:334] "Generic (PLEG): container finished" podID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerID="5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b" exitCode=0 Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.629634 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerDied","Data":"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b"} Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.633849 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.863344 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.865376 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.879903 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.923944 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.924041 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:48 crc kubenswrapper[4736]: I0316 18:08:48.924306 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cbf\" (UniqueName: \"kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.029010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.029142 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.029335 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72cbf\" (UniqueName: \"kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.030242 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.030565 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.077642 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-72cbf\" (UniqueName: \"kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf\") pod \"community-operators-vwfj8\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.201354 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.643440 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerStarted","Data":"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5"} Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.667408 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qtz8q" podStartSLOduration=3.107109325 podStartE2EDuration="6.667385214s" podCreationTimestamp="2026-03-16 18:08:43 +0000 UTC" firstStartedPulling="2026-03-16 18:08:45.598807123 +0000 UTC m=+10527.326197410" lastFinishedPulling="2026-03-16 18:08:49.159083012 +0000 UTC m=+10530.886473299" observedRunningTime="2026-03-16 18:08:49.664898686 +0000 UTC m=+10531.392288993" watchObservedRunningTime="2026-03-16 18:08:49.667385214 +0000 UTC m=+10531.394775531" Mar 16 18:08:49 crc kubenswrapper[4736]: I0316 18:08:49.726576 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:08:49 crc kubenswrapper[4736]: W0316 18:08:49.732237 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3299c641_a09f_4a65_af53_c9650361fa0d.slice/crio-ab57743749e5c89a404b618e2a11258c7eb40c530d88007523f337ae6f6863dc WatchSource:0}: Error finding container ab57743749e5c89a404b618e2a11258c7eb40c530d88007523f337ae6f6863dc: Status 404 returned error can't find the container with id ab57743749e5c89a404b618e2a11258c7eb40c530d88007523f337ae6f6863dc Mar 16 18:08:50 crc kubenswrapper[4736]: I0316 18:08:50.652811 4736 generic.go:334] "Generic (PLEG): container finished" podID="3299c641-a09f-4a65-af53-c9650361fa0d" containerID="0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1" exitCode=0 Mar 16 18:08:50 crc kubenswrapper[4736]: I0316 18:08:50.652948 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerDied","Data":"0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1"} Mar 16 18:08:50 crc kubenswrapper[4736]: I0316 18:08:50.653158 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerStarted","Data":"ab57743749e5c89a404b618e2a11258c7eb40c530d88007523f337ae6f6863dc"} Mar 16 18:08:52 crc kubenswrapper[4736]: I0316 18:08:52.671510 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerStarted","Data":"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a"} Mar 16 18:08:53 crc kubenswrapper[4736]: I0316 18:08:53.684877 4736 generic.go:334] "Generic (PLEG): container finished" 
podID="3299c641-a09f-4a65-af53-c9650361fa0d" containerID="9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a" exitCode=0 Mar 16 18:08:53 crc kubenswrapper[4736]: I0316 18:08:53.685087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerDied","Data":"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a"} Mar 16 18:08:53 crc kubenswrapper[4736]: I0316 18:08:53.830049 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:53 crc kubenswrapper[4736]: I0316 18:08:53.830123 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:08:54 crc kubenswrapper[4736]: I0316 18:08:54.701680 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerStarted","Data":"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772"} Mar 16 18:08:54 crc kubenswrapper[4736]: I0316 18:08:54.727811 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vwfj8" podStartSLOduration=3.278174448 podStartE2EDuration="6.727788956s" podCreationTimestamp="2026-03-16 18:08:48 +0000 UTC" firstStartedPulling="2026-03-16 18:08:50.654904657 +0000 UTC m=+10532.382294944" lastFinishedPulling="2026-03-16 18:08:54.104519165 +0000 UTC m=+10535.831909452" observedRunningTime="2026-03-16 18:08:54.719485769 +0000 UTC m=+10536.446876046" watchObservedRunningTime="2026-03-16 18:08:54.727788956 +0000 UTC m=+10536.455179243" Mar 16 18:08:54 crc kubenswrapper[4736]: I0316 18:08:54.883323 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qtz8q" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="registry-server" probeResult="failure" output=< Mar 16 18:08:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:08:54 crc kubenswrapper[4736]: > Mar 16 18:08:59 crc kubenswrapper[4736]: I0316 18:08:59.201986 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:08:59 crc kubenswrapper[4736]: I0316 18:08:59.202590 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:09:00 crc kubenswrapper[4736]: I0316 18:09:00.254340 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vwfj8" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="registry-server" probeResult="failure" output=< Mar 16 18:09:00 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:09:00 crc kubenswrapper[4736]: > Mar 16 18:09:03 crc kubenswrapper[4736]: I0316 18:09:03.918809 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:09:04 crc kubenswrapper[4736]: I0316 18:09:04.013001 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:09:04 crc kubenswrapper[4736]: I0316 18:09:04.186870 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:09:05 crc kubenswrapper[4736]: I0316 18:09:05.808318 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qtz8q" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="registry-server" containerID="cri-o://5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5" gracePeriod=2 Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.420827 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.472234 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvlgx\" (UniqueName: \"kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx\") pod \"4db44c42-0295-43a6-90ba-5a9c5535c303\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.472402 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content\") pod \"4db44c42-0295-43a6-90ba-5a9c5535c303\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.472460 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities\") pod \"4db44c42-0295-43a6-90ba-5a9c5535c303\" (UID: \"4db44c42-0295-43a6-90ba-5a9c5535c303\") " Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.475211 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities" (OuterVolumeSpecName: "utilities") pod "4db44c42-0295-43a6-90ba-5a9c5535c303" (UID: "4db44c42-0295-43a6-90ba-5a9c5535c303"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.491975 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx" (OuterVolumeSpecName: "kube-api-access-xvlgx") pod "4db44c42-0295-43a6-90ba-5a9c5535c303" (UID: "4db44c42-0295-43a6-90ba-5a9c5535c303"). InnerVolumeSpecName "kube-api-access-xvlgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.574916 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvlgx\" (UniqueName: \"kubernetes.io/projected/4db44c42-0295-43a6-90ba-5a9c5535c303-kube-api-access-xvlgx\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.574952 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.608750 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4db44c42-0295-43a6-90ba-5a9c5535c303" (UID: "4db44c42-0295-43a6-90ba-5a9c5535c303"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.676756 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4db44c42-0295-43a6-90ba-5a9c5535c303-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.814640 4736 generic.go:334] "Generic (PLEG): container finished" podID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerID="5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5" exitCode=0 Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.814681 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerDied","Data":"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5"} Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.814706 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qtz8q" event={"ID":"4db44c42-0295-43a6-90ba-5a9c5535c303","Type":"ContainerDied","Data":"e655022f0d0a034b7381c4d99cd2efec59b742462f3e355dc80618396ac8ace9"} Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.814787 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qtz8q" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.816362 4736 scope.go:117] "RemoveContainer" containerID="5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.860381 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.873165 4736 scope.go:117] "RemoveContainer" containerID="5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.882328 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qtz8q"] Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.902500 4736 scope.go:117] "RemoveContainer" containerID="bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.950803 4736 scope.go:117] "RemoveContainer" containerID="5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5" Mar 16 18:09:06 crc kubenswrapper[4736]: E0316 18:09:06.956289 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5\": container with ID starting with 5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5 not found: ID does not exist" containerID="5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.956340 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5"} err="failed to get container status \"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5\": rpc error: code = NotFound desc = could not find container \"5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5\": container with ID starting with 5138251623dfa2158a94adede725cf456ea118deb8c7c3aead97ce0f8e2e8ce5 not found: ID does not exist" Mar 16 
18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.956366 4736 scope.go:117] "RemoveContainer" containerID="5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b" Mar 16 18:09:06 crc kubenswrapper[4736]: E0316 18:09:06.956790 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b\": container with ID starting with 5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b not found: ID does not exist" containerID="5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.956810 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b"} err="failed to get container status \"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b\": rpc error: code = NotFound desc = could not find container \"5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b\": container with ID starting with 5b8eed3bbb57cd0c2f2370d767fcc633e91392193720650fe492aeb7e2ef819b not found: ID does not exist" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.956822 4736 scope.go:117] "RemoveContainer" containerID="bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f" Mar 16 18:09:06 crc kubenswrapper[4736]: E0316 18:09:06.957160 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f\": container with ID starting with bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f not found: ID does not exist" containerID="bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f" Mar 16 18:09:06 crc kubenswrapper[4736]: I0316 18:09:06.957214 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f"} err="failed to get container status \"bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f\": rpc error: code = NotFound desc = could not find container \"bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f\": container with ID starting with bdb33bad38837cc9964cb275172d53e07481f00be1f95f0f679d972a33502f0f not found: ID does not exist" Mar 16 18:09:07 crc kubenswrapper[4736]: I0316 18:09:07.014042 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" path="/var/lib/kubelet/pods/4db44c42-0295-43a6-90ba-5a9c5535c303/volumes" Mar 16 18:09:07 crc kubenswrapper[4736]: I0316 18:09:07.361145 4736 scope.go:117] "RemoveContainer" containerID="797491e33f39159c15d7b6fa57645635becfae6ef6fcbb582e432de4d95e0e8a" Mar 16 18:09:08 crc kubenswrapper[4736]: I0316 18:09:08.508082 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:09:08 crc kubenswrapper[4736]: I0316 18:09:08.508561 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:09:09 crc kubenswrapper[4736]: I0316 18:09:09.285061 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:09:09 crc kubenswrapper[4736]: I0316 18:09:09.354752 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:09:10 crc kubenswrapper[4736]: I0316 18:09:10.593077 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:09:10 crc kubenswrapper[4736]: I0316 18:09:10.855319 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vwfj8" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="registry-server" containerID="cri-o://af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772" gracePeriod=2 Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.692397 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.865122 4736 generic.go:334] "Generic (PLEG): container finished" podID="3299c641-a09f-4a65-af53-c9650361fa0d" containerID="af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772" exitCode=0 Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.865163 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerDied","Data":"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772"} Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.865188 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwfj8" event={"ID":"3299c641-a09f-4a65-af53-c9650361fa0d","Type":"ContainerDied","Data":"ab57743749e5c89a404b618e2a11258c7eb40c530d88007523f337ae6f6863dc"} Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.865205 4736 scope.go:117] "RemoveContainer" containerID="af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.865340 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwfj8" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.875244 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content\") pod \"3299c641-a09f-4a65-af53-c9650361fa0d\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.875353 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities\") pod \"3299c641-a09f-4a65-af53-c9650361fa0d\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.876057 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities" (OuterVolumeSpecName: "utilities") pod "3299c641-a09f-4a65-af53-c9650361fa0d" (UID: "3299c641-a09f-4a65-af53-c9650361fa0d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.877014 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72cbf\" (UniqueName: \"kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf\") pod \"3299c641-a09f-4a65-af53-c9650361fa0d\" (UID: \"3299c641-a09f-4a65-af53-c9650361fa0d\") " Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.877954 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.883174 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf" (OuterVolumeSpecName: "kube-api-access-72cbf") pod "3299c641-a09f-4a65-af53-c9650361fa0d" (UID: "3299c641-a09f-4a65-af53-c9650361fa0d"). InnerVolumeSpecName "kube-api-access-72cbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.896713 4736 scope.go:117] "RemoveContainer" containerID="9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.979288 4736 scope.go:117] "RemoveContainer" containerID="0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1" Mar 16 18:09:11 crc kubenswrapper[4736]: I0316 18:09:11.980777 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72cbf\" (UniqueName: \"kubernetes.io/projected/3299c641-a09f-4a65-af53-c9650361fa0d-kube-api-access-72cbf\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.008074 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3299c641-a09f-4a65-af53-c9650361fa0d" (UID: "3299c641-a09f-4a65-af53-c9650361fa0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.016846 4736 scope.go:117] "RemoveContainer" containerID="af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772" Mar 16 18:09:12 crc kubenswrapper[4736]: E0316 18:09:12.017378 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772\": container with ID starting with af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772 not found: ID does not exist" containerID="af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.017440 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772"} err="failed to get container status \"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772\": rpc error: code = NotFound desc = could not find container \"af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772\": container with ID starting with af647d92a261e62fc9412739e8c870ac279d4863b2c60af404cb3916434d5772 not found: ID does not exist" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.017466 4736 scope.go:117] "RemoveContainer" containerID="9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a" Mar 16 18:09:12 crc kubenswrapper[4736]: E0316 18:09:12.017971 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a\": container with ID starting with 9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a not found: ID does not exist" containerID="9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.017997 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a"} err="failed to get container status \"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a\": rpc error: code = NotFound desc = could not find container \"9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a\": container with ID starting with 9a291f8696cf86927b159b1022c1f113b0c28a1c9cb32a364b2fd91a6c2b282a not found: ID does not exist" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.018011 4736 scope.go:117] "RemoveContainer" containerID="0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1" Mar 16 18:09:12 crc kubenswrapper[4736]: E0316 18:09:12.018257 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1\": container with ID starting with 0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1 not found: ID does not exist" containerID="0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.018300 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1"} err="failed to get container status \"0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1\": rpc error: code = NotFound desc = could not 
find container \"0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1\": container with ID starting with 0a2e3cf1b04778ea7ab10e432dca603cf60e6a45858256bca4beeecb3a4eb2f1 not found: ID does not exist" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.083037 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3299c641-a09f-4a65-af53-c9650361fa0d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.193538 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.201237 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vwfj8"] Mar 16 18:09:12 crc kubenswrapper[4736]: I0316 18:09:12.989028 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" path="/var/lib/kubelet/pods/3299c641-a09f-4a65-af53-c9650361fa0d/volumes" Mar 16 18:09:38 crc kubenswrapper[4736]: I0316 18:09:38.508821 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:09:38 crc kubenswrapper[4736]: I0316 18:09:38.509950 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.285434 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286539 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286555 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286590 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="extract-utilities" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286598 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="extract-utilities" Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286636 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="extract-utilities" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286643 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="extract-utilities" Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286657 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="extract-content" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286662 4736 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="extract-content" Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286681 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="extract-content" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286689 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="extract-content" Mar 16 18:09:56 crc kubenswrapper[4736]: E0316 18:09:56.286714 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.286723 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.287056 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db44c42-0295-43a6-90ba-5a9c5535c303" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.287084 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3299c641-a09f-4a65-af53-c9650361fa0d" containerName="registry-server" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.293100 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.316044 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.404059 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.404126 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5cg\" (UniqueName: \"kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.404216 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.505869 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.505973 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities\") pod 
\"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.506002 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5cg\" (UniqueName: \"kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.506405 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.506411 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.542089 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5cg\" (UniqueName: \"kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg\") pod \"redhat-marketplace-l8g22\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:56 crc kubenswrapper[4736]: I0316 18:09:56.612329 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:09:57 crc kubenswrapper[4736]: W0316 18:09:57.187922 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcea8c10a_d899_4e54_b109_7bbc67f2cc7d.slice/crio-686d7a02585d8ccb59fd4173b467d213bd76bc698029f829fad6b78678442f8f WatchSource:0}: Error finding container 686d7a02585d8ccb59fd4173b467d213bd76bc698029f829fad6b78678442f8f: Status 404 returned error can't find the container with id 686d7a02585d8ccb59fd4173b467d213bd76bc698029f829fad6b78678442f8f Mar 16 18:09:57 crc kubenswrapper[4736]: I0316 18:09:57.198693 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:09:57 crc kubenswrapper[4736]: I0316 18:09:57.481957 4736 generic.go:334] "Generic (PLEG): container finished" podID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerID="4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f" exitCode=0 Mar 16 18:09:57 crc kubenswrapper[4736]: I0316 18:09:57.482154 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerDied","Data":"4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f"} Mar 16 18:09:57 crc kubenswrapper[4736]: I0316 18:09:57.482326 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerStarted","Data":"686d7a02585d8ccb59fd4173b467d213bd76bc698029f829fad6b78678442f8f"} Mar 16 18:09:59 crc kubenswrapper[4736]: I0316 18:09:59.508874 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerStarted","Data":"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7"} Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.142457 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561410-x66fm"] Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.143726 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.146405 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.146924 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.147610 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.151032 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561410-x66fm"] Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.192001 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7b2\" (UniqueName: \"kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2\") pod \"auto-csr-approver-29561410-x66fm\" (UID: \"26cb33b8-6cd0-41c6-8215-a2c837e00939\") " pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.294703 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7b2\" (UniqueName: \"kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2\") pod \"auto-csr-approver-29561410-x66fm\" (UID: \"26cb33b8-6cd0-41c6-8215-a2c837e00939\") " pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.320368 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7b2\" (UniqueName: \"kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2\") pod \"auto-csr-approver-29561410-x66fm\" (UID: \"26cb33b8-6cd0-41c6-8215-a2c837e00939\") " pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.459574 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.519648 4736 generic.go:334] "Generic (PLEG): container finished" podID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerID="c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7" exitCode=0 Mar 16 18:10:00 crc kubenswrapper[4736]: I0316 18:10:00.519704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerDied","Data":"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7"} Mar 16 18:10:01 crc kubenswrapper[4736]: I0316 18:10:01.005593 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561410-x66fm"] Mar 16 18:10:01 crc kubenswrapper[4736]: I0316 18:10:01.529413 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561410-x66fm" event={"ID":"26cb33b8-6cd0-41c6-8215-a2c837e00939","Type":"ContainerStarted","Data":"c5ed754669d7234fd7135ecaa9658479b6fe0437efc807572775378452299d59"} Mar 16 18:10:01 crc kubenswrapper[4736]: I0316 18:10:01.535561 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerStarted","Data":"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513"} Mar 16 18:10:01 crc kubenswrapper[4736]: I0316 18:10:01.574895 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l8g22" podStartSLOduration=1.899623594 podStartE2EDuration="5.5748718s" podCreationTimestamp="2026-03-16 18:09:56 +0000 UTC" firstStartedPulling="2026-03-16 18:09:57.484686922 +0000 UTC m=+10599.212077199" lastFinishedPulling="2026-03-16 18:10:01.159935118 +0000 UTC m=+10602.887325405" observedRunningTime="2026-03-16 18:10:01.562886563 +0000 UTC m=+10603.290276900" watchObservedRunningTime="2026-03-16 18:10:01.5748718 +0000 UTC m=+10603.302262087" Mar 16 18:10:03 crc kubenswrapper[4736]: I0316 18:10:03.555539 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561410-x66fm" event={"ID":"26cb33b8-6cd0-41c6-8215-a2c837e00939","Type":"ContainerStarted","Data":"587caa56124088812954bc250081036706a9eb7caa0d64f0838f5d47c93ba3a6"} Mar 16 18:10:03 crc kubenswrapper[4736]: I0316 18:10:03.573039 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561410-x66fm" podStartSLOduration=1.807032629 podStartE2EDuration="3.573022375s" podCreationTimestamp="2026-03-16 18:10:00 +0000 UTC" firstStartedPulling="2026-03-16 18:10:00.996090651 +0000 UTC m=+10602.723480928" lastFinishedPulling="2026-03-16 18:10:02.762080387 +0000 UTC m=+10604.489470674" observedRunningTime="2026-03-16 18:10:03.569355555 +0000 UTC m=+10605.296745842" watchObservedRunningTime="2026-03-16 18:10:03.573022375 +0000 UTC m=+10605.300412682" Mar 16 18:10:04 crc kubenswrapper[4736]: I0316 18:10:04.563170 4736 generic.go:334] "Generic (PLEG): container finished" podID="26cb33b8-6cd0-41c6-8215-a2c837e00939" containerID="587caa56124088812954bc250081036706a9eb7caa0d64f0838f5d47c93ba3a6" exitCode=0 Mar 16 18:10:04 crc kubenswrapper[4736]: I0316 18:10:04.563271 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561410-x66fm" 
event={"ID":"26cb33b8-6cd0-41c6-8215-a2c837e00939","Type":"ContainerDied","Data":"587caa56124088812954bc250081036706a9eb7caa0d64f0838f5d47c93ba3a6"} Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.583037 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561410-x66fm" event={"ID":"26cb33b8-6cd0-41c6-8215-a2c837e00939","Type":"ContainerDied","Data":"c5ed754669d7234fd7135ecaa9658479b6fe0437efc807572775378452299d59"} Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.584371 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ed754669d7234fd7135ecaa9658479b6fe0437efc807572775378452299d59" Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.612787 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.613073 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.644152 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.749480 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb7b2\" (UniqueName: \"kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2\") pod \"26cb33b8-6cd0-41c6-8215-a2c837e00939\" (UID: \"26cb33b8-6cd0-41c6-8215-a2c837e00939\") " Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.764729 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2" (OuterVolumeSpecName: "kube-api-access-sb7b2") pod "26cb33b8-6cd0-41c6-8215-a2c837e00939" (UID: "26cb33b8-6cd0-41c6-8215-a2c837e00939"). InnerVolumeSpecName "kube-api-access-sb7b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:10:06 crc kubenswrapper[4736]: I0316 18:10:06.852559 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb7b2\" (UniqueName: \"kubernetes.io/projected/26cb33b8-6cd0-41c6-8215-a2c837e00939-kube-api-access-sb7b2\") on node \"crc\" DevicePath \"\"" Mar 16 18:10:07 crc kubenswrapper[4736]: I0316 18:10:07.591117 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561410-x66fm" Mar 16 18:10:07 crc kubenswrapper[4736]: I0316 18:10:07.687560 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-l8g22" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="registry-server" probeResult="failure" output=< Mar 16 18:10:07 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:10:07 crc kubenswrapper[4736]: > Mar 16 18:10:07 crc kubenswrapper[4736]: I0316 18:10:07.723587 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561404-7fc7k"] Mar 16 18:10:07 crc kubenswrapper[4736]: I0316 18:10:07.732471 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561404-7fc7k"] Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.508460 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.509689 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.509743 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.510548 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.510607 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" gracePeriod=600 Mar 16 18:10:08 crc kubenswrapper[4736]: E0316 18:10:08.634076 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:10:08 crc kubenswrapper[4736]: I0316 18:10:08.994975 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f479397a-a884-422d-a0a1-11584e019834" path="/var/lib/kubelet/pods/f479397a-a884-422d-a0a1-11584e019834/volumes" Mar 16 18:10:09 crc kubenswrapper[4736]: I0316 18:10:09.612346 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" 
containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" exitCode=0 Mar 16 18:10:09 crc kubenswrapper[4736]: I0316 18:10:09.612422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973"} Mar 16 18:10:09 crc kubenswrapper[4736]: I0316 18:10:09.612581 4736 scope.go:117] "RemoveContainer" containerID="e6cbd714d2780fc58536aa22c4e07012375aac3b67cad375ec626f65c37d1f92" Mar 16 18:10:09 crc kubenswrapper[4736]: I0316 18:10:09.613276 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:10:09 crc kubenswrapper[4736]: E0316 18:10:09.614077 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:10:16 crc kubenswrapper[4736]: I0316 18:10:16.719930 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:16 crc kubenswrapper[4736]: I0316 18:10:16.788472 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:16 crc kubenswrapper[4736]: I0316 18:10:16.957001 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:10:18 crc kubenswrapper[4736]: I0316 18:10:18.708171 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l8g22" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="registry-server" containerID="cri-o://d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513" gracePeriod=2 Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.483216 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.632584 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5cg\" (UniqueName: \"kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg\") pod \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.632756 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities\") pod \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.632982 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content\") pod \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\" (UID: \"cea8c10a-d899-4e54-b109-7bbc67f2cc7d\") " Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.633484 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities" (OuterVolumeSpecName: "utilities") pod "cea8c10a-d899-4e54-b109-7bbc67f2cc7d" (UID: "cea8c10a-d899-4e54-b109-7bbc67f2cc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.634044 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.652595 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg" (OuterVolumeSpecName: "kube-api-access-qm5cg") pod "cea8c10a-d899-4e54-b109-7bbc67f2cc7d" (UID: "cea8c10a-d899-4e54-b109-7bbc67f2cc7d"). InnerVolumeSpecName "kube-api-access-qm5cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.668777 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cea8c10a-d899-4e54-b109-7bbc67f2cc7d" (UID: "cea8c10a-d899-4e54-b109-7bbc67f2cc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.724859 4736 generic.go:334] "Generic (PLEG): container finished" podID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerID="d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513" exitCode=0 Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.724918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerDied","Data":"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513"} Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.724958 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l8g22" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.724980 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l8g22" event={"ID":"cea8c10a-d899-4e54-b109-7bbc67f2cc7d","Type":"ContainerDied","Data":"686d7a02585d8ccb59fd4173b467d213bd76bc698029f829fad6b78678442f8f"} Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.725009 4736 scope.go:117] "RemoveContainer" containerID="d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.736838 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5cg\" (UniqueName: \"kubernetes.io/projected/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-kube-api-access-qm5cg\") on node \"crc\" DevicePath \"\"" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.736872 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea8c10a-d899-4e54-b109-7bbc67f2cc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.776914 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.806237 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l8g22"] Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.837718 4736 scope.go:117] "RemoveContainer" containerID="c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.892332 4736 scope.go:117] "RemoveContainer" containerID="4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.971738 4736 scope.go:117] "RemoveContainer" containerID="d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513" Mar 16 18:10:19 crc kubenswrapper[4736]: E0316 18:10:19.972786 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513\": container with ID starting with d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513 not found: ID does not exist" containerID="d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.972830 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513"} err="failed to get container status \"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513\": rpc error: code = NotFound desc = could not find container \"d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513\": container with ID starting with d1e1633c75b23f9aa55edf51b7fd530328f42a6868cf47c12adfbf17af31d513 not found: ID does not exist" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.972855 4736 scope.go:117] "RemoveContainer" containerID="c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7" Mar 16 18:10:19 crc kubenswrapper[4736]: E0316 18:10:19.973183 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7\": container with ID starting with 
c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7 not found: ID does not exist" containerID="c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.973232 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7"} err="failed to get container status \"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7\": rpc error: code = NotFound desc = could not find container \"c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7\": container with ID starting with c230b6d5663f3fe0b9d95fe5a134fd9bfdb898eabe85b5035aa546c8f37fbcd7 not found: ID does not exist" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.973258 4736 scope.go:117] "RemoveContainer" containerID="4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f" Mar 16 18:10:19 crc kubenswrapper[4736]: E0316 18:10:19.973534 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f\": container with ID starting with 4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f not found: ID does not exist" containerID="4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f" Mar 16 18:10:19 crc kubenswrapper[4736]: I0316 18:10:19.973559 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f"} err="failed to get container status \"4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f\": rpc error: code = NotFound desc = could not find container \"4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f\": container with ID starting with 4cdaa55cf6a220bb3b8c9732bc9027b55e8c30fcfd18c0cb071771e08206817f not found: ID does not exist" Mar 16 18:10:20 crc kubenswrapper[4736]: I0316 18:10:20.989877 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" path="/var/lib/kubelet/pods/cea8c10a-d899-4e54-b109-7bbc67f2cc7d/volumes" Mar 16 18:10:23 crc kubenswrapper[4736]: I0316 18:10:23.978594 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:10:23 crc kubenswrapper[4736]: E0316 18:10:23.980355 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:10:35 crc kubenswrapper[4736]: I0316 18:10:35.978757 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:10:35 crc kubenswrapper[4736]: E0316 18:10:35.979617 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:10:46 crc kubenswrapper[4736]: I0316 18:10:46.978397 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:10:46 crc kubenswrapper[4736]: E0316 18:10:46.979246 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:10:58 crc kubenswrapper[4736]: I0316 18:10:58.991202 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:10:58 crc kubenswrapper[4736]: E0316 18:10:58.994426 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:11:07 crc kubenswrapper[4736]: I0316 18:11:07.562522 4736 scope.go:117] "RemoveContainer" containerID="8060089675f0eb49cbb3f267656e6287b9d7e66e3b05d85dcf188ffe63703ced" Mar 16 18:11:11 crc kubenswrapper[4736]: I0316 18:11:11.978325 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:11:11 crc kubenswrapper[4736]: E0316 18:11:11.979156 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:11:22 crc kubenswrapper[4736]: I0316 18:11:22.978723 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:11:22 crc kubenswrapper[4736]: E0316 18:11:22.979630 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:11:33 crc kubenswrapper[4736]: I0316 18:11:33.978470 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:11:33 crc kubenswrapper[4736]: E0316 18:11:33.979726 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:11:44 crc kubenswrapper[4736]: I0316 18:11:44.978098 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:11:44 crc kubenswrapper[4736]: E0316 18:11:44.978948 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:11:57 crc kubenswrapper[4736]: I0316 18:11:57.978473 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:11:57 crc kubenswrapper[4736]: E0316 18:11:57.979468 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.174407 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561412-kt2hp"] Mar 16 18:12:00 crc kubenswrapper[4736]: E0316 18:12:00.177737 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="extract-utilities" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.177970 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="extract-utilities" Mar 16 18:12:00 crc kubenswrapper[4736]: E0316 18:12:00.178212 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="registry-server" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.178376 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="registry-server" Mar 16 18:12:00 crc kubenswrapper[4736]: E0316 18:12:00.178601 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="extract-content" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.178794 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="extract-content" Mar 16 18:12:00 crc kubenswrapper[4736]: E0316 18:12:00.179023 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cb33b8-6cd0-41c6-8215-a2c837e00939" containerName="oc" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.179270 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cb33b8-6cd0-41c6-8215-a2c837e00939" containerName="oc" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.179895 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea8c10a-d899-4e54-b109-7bbc67f2cc7d" containerName="registry-server" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.180208 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cb33b8-6cd0-41c6-8215-a2c837e00939" containerName="oc" Mar 16 18:12:00 crc 
kubenswrapper[4736]: I0316 18:12:00.181867 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.188032 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.188291 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.189875 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561412-kt2hp"] Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.191569 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.257759 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql4b\" (UniqueName: \"kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b\") pod \"auto-csr-approver-29561412-kt2hp\" (UID: \"cdf49138-10b0-43bd-8ca8-52f909a3c064\") " pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.359513 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vql4b\" (UniqueName: \"kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b\") pod \"auto-csr-approver-29561412-kt2hp\" (UID: \"cdf49138-10b0-43bd-8ca8-52f909a3c064\") " pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.499499 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vql4b\" (UniqueName: \"kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b\") pod \"auto-csr-approver-29561412-kt2hp\" (UID: \"cdf49138-10b0-43bd-8ca8-52f909a3c064\") " pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:00 crc kubenswrapper[4736]: I0316 18:12:00.516600 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:01 crc kubenswrapper[4736]: I0316 18:12:01.018844 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561412-kt2hp"] Mar 16 18:12:01 crc kubenswrapper[4736]: I0316 18:12:01.749469 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" event={"ID":"cdf49138-10b0-43bd-8ca8-52f909a3c064","Type":"ContainerStarted","Data":"f0c2825d5faff77af119a816eda552b7cc76888da5d28111d2b3d6b2bfa62ccb"} Mar 16 18:12:02 crc kubenswrapper[4736]: I0316 18:12:02.760087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" event={"ID":"cdf49138-10b0-43bd-8ca8-52f909a3c064","Type":"ContainerStarted","Data":"cfd6d5060fbcdd094d2b9493f5b7a3d3ccd0862a871e8c6f1d301f2fafea2a9c"} Mar 16 18:12:02 crc kubenswrapper[4736]: I0316 18:12:02.786358 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" podStartSLOduration=1.767263333 podStartE2EDuration="2.786336486s" podCreationTimestamp="2026-03-16 18:12:00 +0000 UTC" firstStartedPulling="2026-03-16 18:12:01.041973779 +0000 UTC m=+10722.769364106" lastFinishedPulling="2026-03-16 18:12:02.061046952 +0000 UTC m=+10723.788437259" observedRunningTime="2026-03-16 18:12:02.777399112 +0000 UTC m=+10724.504789409" watchObservedRunningTime="2026-03-16 18:12:02.786336486 +0000 UTC m=+10724.513726783" Mar 16 18:12:03 crc kubenswrapper[4736]: I0316 18:12:03.774677 4736 generic.go:334] "Generic (PLEG): container finished" podID="cdf49138-10b0-43bd-8ca8-52f909a3c064" containerID="cfd6d5060fbcdd094d2b9493f5b7a3d3ccd0862a871e8c6f1d301f2fafea2a9c" exitCode=0 Mar 16 18:12:03 crc kubenswrapper[4736]: I0316 18:12:03.774760 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" event={"ID":"cdf49138-10b0-43bd-8ca8-52f909a3c064","Type":"ContainerDied","Data":"cfd6d5060fbcdd094d2b9493f5b7a3d3ccd0862a871e8c6f1d301f2fafea2a9c"} Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.187012 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.377481 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vql4b\" (UniqueName: \"kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b\") pod \"cdf49138-10b0-43bd-8ca8-52f909a3c064\" (UID: \"cdf49138-10b0-43bd-8ca8-52f909a3c064\") " Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.386431 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b" (OuterVolumeSpecName: "kube-api-access-vql4b") pod "cdf49138-10b0-43bd-8ca8-52f909a3c064" (UID: "cdf49138-10b0-43bd-8ca8-52f909a3c064"). InnerVolumeSpecName "kube-api-access-vql4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.481332 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vql4b\" (UniqueName: \"kubernetes.io/projected/cdf49138-10b0-43bd-8ca8-52f909a3c064-kube-api-access-vql4b\") on node \"crc\" DevicePath \"\"" Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.796087 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" event={"ID":"cdf49138-10b0-43bd-8ca8-52f909a3c064","Type":"ContainerDied","Data":"f0c2825d5faff77af119a816eda552b7cc76888da5d28111d2b3d6b2bfa62ccb"} Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.796254 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561412-kt2hp" Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.796270 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0c2825d5faff77af119a816eda552b7cc76888da5d28111d2b3d6b2bfa62ccb" Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.875119 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561406-wxb9x"] Mar 16 18:12:05 crc kubenswrapper[4736]: I0316 18:12:05.882960 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561406-wxb9x"] Mar 16 18:12:06 crc kubenswrapper[4736]: I0316 18:12:06.998155 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8253ef30-ce5e-42ae-8c8a-d687c009eb6d" path="/var/lib/kubelet/pods/8253ef30-ce5e-42ae-8c8a-d687c009eb6d/volumes" Mar 16 18:12:07 crc kubenswrapper[4736]: I0316 18:12:07.684877 4736 scope.go:117] "RemoveContainer" containerID="2708ab8130dbca4541463dfb7fef40af4955bbb4f5f3bf68ff2ec3a398dd407a" Mar 16 18:12:11 crc kubenswrapper[4736]: I0316 18:12:11.978525 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:12:11 crc kubenswrapper[4736]: E0316 18:12:11.979292 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:12:25 crc kubenswrapper[4736]: I0316 18:12:25.978673 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:12:25 crc kubenswrapper[4736]: E0316 18:12:25.980086 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:12:36 crc kubenswrapper[4736]: I0316 18:12:36.978965 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:12:36 crc kubenswrapper[4736]: E0316 18:12:36.979903 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:12:48 crc kubenswrapper[4736]: I0316 18:12:48.990728 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:12:48 crc kubenswrapper[4736]: E0316 18:12:48.992086 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:13:03 crc kubenswrapper[4736]: I0316 18:13:03.979150 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:13:03 crc kubenswrapper[4736]: E0316 18:13:03.981495 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:13:14 crc kubenswrapper[4736]: I0316 18:13:14.979129 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:13:14 crc kubenswrapper[4736]: E0316 18:13:14.980145 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:13:26 crc kubenswrapper[4736]: I0316 18:13:26.979540 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:13:26 crc kubenswrapper[4736]: E0316 18:13:26.980593 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:13:41 crc kubenswrapper[4736]: I0316 18:13:41.992971 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:13:41 crc kubenswrapper[4736]: E0316 18:13:41.999449 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:13:56 crc kubenswrapper[4736]: I0316 18:13:56.978836 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:13:56 crc kubenswrapper[4736]: E0316 18:13:56.979764 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.176257 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561414-8zdsm"] Mar 16 18:14:00 crc kubenswrapper[4736]: E0316 18:14:00.177187 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf49138-10b0-43bd-8ca8-52f909a3c064" containerName="oc" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.177205 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf49138-10b0-43bd-8ca8-52f909a3c064" containerName="oc" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.177434 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf49138-10b0-43bd-8ca8-52f909a3c064" containerName="oc" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.178238 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.181921 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.182435 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.183088 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.196962 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561414-8zdsm"] Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.286857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hh6g\" (UniqueName: \"kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g\") pod \"auto-csr-approver-29561414-8zdsm\" (UID: \"dc89d032-c80e-4004-9550-f269d0bf354e\") " pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.388395 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hh6g\" (UniqueName: \"kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g\") pod \"auto-csr-approver-29561414-8zdsm\" (UID: \"dc89d032-c80e-4004-9550-f269d0bf354e\") " pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:00 crc kubenswrapper[4736]: I0316 18:14:00.893780 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hh6g\" (UniqueName: 
\"kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g\") pod \"auto-csr-approver-29561414-8zdsm\" (UID: \"dc89d032-c80e-4004-9550-f269d0bf354e\") " pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:01 crc kubenswrapper[4736]: I0316 18:14:01.104455 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:01 crc kubenswrapper[4736]: I0316 18:14:01.654520 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561414-8zdsm"] Mar 16 18:14:01 crc kubenswrapper[4736]: I0316 18:14:01.659847 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:14:02 crc kubenswrapper[4736]: I0316 18:14:02.064816 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" event={"ID":"dc89d032-c80e-4004-9550-f269d0bf354e","Type":"ContainerStarted","Data":"81de165ed8af248d20d623d3e1c536977cab7a7403a2797ab0a7062036f407a6"} Mar 16 18:14:04 crc kubenswrapper[4736]: I0316 18:14:04.082276 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" event={"ID":"dc89d032-c80e-4004-9550-f269d0bf354e","Type":"ContainerStarted","Data":"4d4e20458d9810a832362d1480adba0c2be0c92eaf8beaecddfac0ab961d785d"} Mar 16 18:14:04 crc kubenswrapper[4736]: I0316 18:14:04.106583 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" podStartSLOduration=3.119919769 podStartE2EDuration="4.106558677s" podCreationTimestamp="2026-03-16 18:14:00 +0000 UTC" firstStartedPulling="2026-03-16 18:14:01.656865522 +0000 UTC m=+10843.384255809" lastFinishedPulling="2026-03-16 18:14:02.64350443 +0000 UTC m=+10844.370894717" observedRunningTime="2026-03-16 18:14:04.097805888 +0000 UTC m=+10845.825196175" watchObservedRunningTime="2026-03-16 18:14:04.106558677 +0000 UTC m=+10845.833948974" Mar 16 18:14:05 crc kubenswrapper[4736]: I0316 18:14:05.092969 4736 generic.go:334] "Generic (PLEG): container finished" podID="dc89d032-c80e-4004-9550-f269d0bf354e" containerID="4d4e20458d9810a832362d1480adba0c2be0c92eaf8beaecddfac0ab961d785d" exitCode=0 Mar 16 18:14:05 crc kubenswrapper[4736]: I0316 18:14:05.093078 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" event={"ID":"dc89d032-c80e-4004-9550-f269d0bf354e","Type":"ContainerDied","Data":"4d4e20458d9810a832362d1480adba0c2be0c92eaf8beaecddfac0ab961d785d"} Mar 16 18:14:06 crc kubenswrapper[4736]: I0316 18:14:06.708447 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:06 crc kubenswrapper[4736]: I0316 18:14:06.810641 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hh6g\" (UniqueName: \"kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g\") pod \"dc89d032-c80e-4004-9550-f269d0bf354e\" (UID: \"dc89d032-c80e-4004-9550-f269d0bf354e\") " Mar 16 18:14:06 crc kubenswrapper[4736]: I0316 18:14:06.819894 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g" (OuterVolumeSpecName: "kube-api-access-4hh6g") pod "dc89d032-c80e-4004-9550-f269d0bf354e" (UID: "dc89d032-c80e-4004-9550-f269d0bf354e"). InnerVolumeSpecName "kube-api-access-4hh6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:14:06 crc kubenswrapper[4736]: I0316 18:14:06.913770 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hh6g\" (UniqueName: \"kubernetes.io/projected/dc89d032-c80e-4004-9550-f269d0bf354e-kube-api-access-4hh6g\") on node \"crc\" DevicePath \"\"" Mar 16 18:14:07 crc kubenswrapper[4736]: I0316 18:14:07.115288 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" event={"ID":"dc89d032-c80e-4004-9550-f269d0bf354e","Type":"ContainerDied","Data":"81de165ed8af248d20d623d3e1c536977cab7a7403a2797ab0a7062036f407a6"} Mar 16 18:14:07 crc kubenswrapper[4736]: I0316 18:14:07.115347 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81de165ed8af248d20d623d3e1c536977cab7a7403a2797ab0a7062036f407a6" Mar 16 18:14:07 crc kubenswrapper[4736]: I0316 18:14:07.115388 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561414-8zdsm" Mar 16 18:14:07 crc kubenswrapper[4736]: I0316 18:14:07.187895 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561408-x2c9f"] Mar 16 18:14:07 crc kubenswrapper[4736]: I0316 18:14:07.195647 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561408-x2c9f"] Mar 16 18:14:08 crc kubenswrapper[4736]: I0316 18:14:08.989420 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02858064-4490-4511-9348-c1444d5e8539" path="/var/lib/kubelet/pods/02858064-4490-4511-9348-c1444d5e8539/volumes" Mar 16 18:14:09 crc kubenswrapper[4736]: I0316 18:14:09.978210 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:14:09 crc kubenswrapper[4736]: E0316 18:14:09.978742 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:14:22 crc kubenswrapper[4736]: I0316 18:14:22.978797 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:14:22 crc kubenswrapper[4736]: E0316 18:14:22.979603 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.724969 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:14:33 crc kubenswrapper[4736]: E0316 18:14:33.727277 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc89d032-c80e-4004-9550-f269d0bf354e" containerName="oc" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.727301 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc89d032-c80e-4004-9550-f269d0bf354e" containerName="oc" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.727684 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc89d032-c80e-4004-9550-f269d0bf354e" containerName="oc" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.730241 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.794554 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.794810 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp8q9\" (UniqueName: \"kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.794974 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.811098 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.896453 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.896672 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.896736 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp8q9\" (UniqueName: \"kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.896916 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.897596 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:33 crc kubenswrapper[4736]: I0316 18:14:33.920555 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kp8q9\" (UniqueName: \"kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9\") pod \"redhat-operators-gzgr2\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:34 crc kubenswrapper[4736]: I0316 18:14:34.059889 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:34 crc kubenswrapper[4736]: I0316 18:14:34.622193 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:14:35 crc kubenswrapper[4736]: I0316 18:14:35.423496 4736 generic.go:334] "Generic (PLEG): container finished" podID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerID="dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0" exitCode=0 Mar 16 18:14:35 crc kubenswrapper[4736]: I0316 18:14:35.423558 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerDied","Data":"dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0"} Mar 16 18:14:35 crc kubenswrapper[4736]: I0316 18:14:35.427386 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerStarted","Data":"b3a2a472705ef3643a62d6903374ffb7befc93be6c9443c9f962639576623185"} Mar 16 18:14:35 crc kubenswrapper[4736]: I0316 18:14:35.978526 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:14:35 crc kubenswrapper[4736]: E0316 18:14:35.979511 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:14:37 crc kubenswrapper[4736]: I0316 18:14:37.447885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerStarted","Data":"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c"} Mar 16 18:14:41 crc kubenswrapper[4736]: I0316 18:14:41.487511 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerDied","Data":"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c"} Mar 16 18:14:41 crc kubenswrapper[4736]: I0316 18:14:41.488070 4736 generic.go:334] "Generic (PLEG): container finished" podID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerID="9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c" exitCode=0 Mar 16 18:14:42 crc kubenswrapper[4736]: I0316 18:14:42.501384 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerStarted","Data":"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba"} Mar 16 18:14:42 crc kubenswrapper[4736]: I0316 18:14:42.540051 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-gzgr2" podStartSLOduration=3.087292683 podStartE2EDuration="9.540013662s" podCreationTimestamp="2026-03-16 18:14:33 +0000 UTC" firstStartedPulling="2026-03-16 18:14:35.425609864 +0000 UTC m=+10877.153000151" lastFinishedPulling="2026-03-16 18:14:41.878330842 +0000 UTC m=+10883.605721130" observedRunningTime="2026-03-16 18:14:42.523906742 +0000 UTC m=+10884.251297029" watchObservedRunningTime="2026-03-16 18:14:42.540013662 +0000 UTC m=+10884.267403949" Mar 16 18:14:44 crc kubenswrapper[4736]: I0316 18:14:44.062620 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:44 crc kubenswrapper[4736]: I0316 18:14:44.063302 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:14:45 crc kubenswrapper[4736]: I0316 18:14:45.149163 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzgr2" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" probeResult="failure" output=< Mar 16 18:14:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:14:45 crc kubenswrapper[4736]: > Mar 16 18:14:46 crc kubenswrapper[4736]: I0316 18:14:46.979557 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:14:46 crc kubenswrapper[4736]: E0316 18:14:46.982018 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:14:55 crc kubenswrapper[4736]: I0316 18:14:55.150548 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzgr2" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" probeResult="failure" output=< Mar 16 18:14:55 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:14:55 crc kubenswrapper[4736]: > Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.243504 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd"] Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.246038 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.254382 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.254481 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.260049 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd"] Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.264393 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.264541 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.264578 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48dhp\" (UniqueName: \"kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.366695 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.366785 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48dhp\" (UniqueName: \"kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.366869 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.368189 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume\") pod 
\"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.387835 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.391424 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48dhp\" (UniqueName: \"kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp\") pod \"collect-profiles-29561415-6l6fd\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:00 crc kubenswrapper[4736]: I0316 18:15:00.573635 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:01 crc kubenswrapper[4736]: I0316 18:15:01.378202 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd"] Mar 16 18:15:01 crc kubenswrapper[4736]: I0316 18:15:01.702879 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" event={"ID":"3963c081-267c-4116-b66f-06a77ceccd93","Type":"ContainerStarted","Data":"ef3fee262b6a5ffa18116ec771fb503568d7496bf62d0a9939ae06bb50275f42"} Mar 16 18:15:01 crc kubenswrapper[4736]: I0316 18:15:01.703174 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" event={"ID":"3963c081-267c-4116-b66f-06a77ceccd93","Type":"ContainerStarted","Data":"f616abf5e22e1eb24af27e4e07900f1a14a6d1cc52b5573dd5816a9490c0583e"} Mar 16 18:15:01 crc kubenswrapper[4736]: I0316 18:15:01.724307 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" podStartSLOduration=1.7242870030000002 podStartE2EDuration="1.724287003s" podCreationTimestamp="2026-03-16 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:15:01.721735303 +0000 UTC m=+10903.449125590" watchObservedRunningTime="2026-03-16 18:15:01.724287003 +0000 UTC m=+10903.451677290" Mar 16 18:15:01 crc kubenswrapper[4736]: I0316 18:15:01.978186 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:15:01 crc kubenswrapper[4736]: E0316 18:15:01.978525 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:15:02 crc kubenswrapper[4736]: I0316 18:15:02.711431 4736 generic.go:334] "Generic (PLEG): container finished" podID="3963c081-267c-4116-b66f-06a77ceccd93" 
containerID="ef3fee262b6a5ffa18116ec771fb503568d7496bf62d0a9939ae06bb50275f42" exitCode=0 Mar 16 18:15:02 crc kubenswrapper[4736]: I0316 18:15:02.711521 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" event={"ID":"3963c081-267c-4116-b66f-06a77ceccd93","Type":"ContainerDied","Data":"ef3fee262b6a5ffa18116ec771fb503568d7496bf62d0a9939ae06bb50275f42"} Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.714755 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.740989 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" event={"ID":"3963c081-267c-4116-b66f-06a77ceccd93","Type":"ContainerDied","Data":"f616abf5e22e1eb24af27e4e07900f1a14a6d1cc52b5573dd5816a9490c0583e"} Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.741702 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f616abf5e22e1eb24af27e4e07900f1a14a6d1cc52b5573dd5816a9490c0583e" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.742327 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561415-6l6fd" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.750363 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume\") pod \"3963c081-267c-4116-b66f-06a77ceccd93\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.750561 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume\") pod \"3963c081-267c-4116-b66f-06a77ceccd93\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.750611 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48dhp\" (UniqueName: \"kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp\") pod \"3963c081-267c-4116-b66f-06a77ceccd93\" (UID: \"3963c081-267c-4116-b66f-06a77ceccd93\") " Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.755841 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume" (OuterVolumeSpecName: "config-volume") pod "3963c081-267c-4116-b66f-06a77ceccd93" (UID: "3963c081-267c-4116-b66f-06a77ceccd93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.772633 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp" (OuterVolumeSpecName: "kube-api-access-48dhp") pod "3963c081-267c-4116-b66f-06a77ceccd93" (UID: "3963c081-267c-4116-b66f-06a77ceccd93"). InnerVolumeSpecName "kube-api-access-48dhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.780620 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3963c081-267c-4116-b66f-06a77ceccd93" (UID: "3963c081-267c-4116-b66f-06a77ceccd93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.852816 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3963c081-267c-4116-b66f-06a77ceccd93-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.852849 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48dhp\" (UniqueName: \"kubernetes.io/projected/3963c081-267c-4116-b66f-06a77ceccd93-kube-api-access-48dhp\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:04 crc kubenswrapper[4736]: I0316 18:15:04.852858 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3963c081-267c-4116-b66f-06a77ceccd93-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:05 crc kubenswrapper[4736]: I0316 18:15:05.658658 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzgr2" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" probeResult="failure" output=< Mar 16 18:15:05 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:15:05 crc kubenswrapper[4736]: > Mar 16 18:15:05 crc kubenswrapper[4736]: I0316 18:15:05.803136 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq"] Mar 16 18:15:05 crc kubenswrapper[4736]: I0316 18:15:05.811577 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561370-xrjlq"] Mar 16 18:15:06 crc kubenswrapper[4736]: I0316 18:15:06.988076 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc919f4-69b8-4475-98f2-413962ddefb4" path="/var/lib/kubelet/pods/3dc919f4-69b8-4475-98f2-413962ddefb4/volumes" Mar 16 18:15:07 crc kubenswrapper[4736]: I0316 18:15:07.831625 4736 scope.go:117] "RemoveContainer" containerID="d1d983ca7ac478df850289c46acdef42517546cbf87d2dbcb332cdde75583835" Mar 16 18:15:07 crc kubenswrapper[4736]: I0316 18:15:07.882718 4736 scope.go:117] "RemoveContainer" containerID="79293c63ce84dbd9b4bd7d7a30aea4a3ab70fdfe2ded26cda9aa9c390619ceaa" Mar 16 18:15:14 crc kubenswrapper[4736]: I0316 18:15:14.978667 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:15:15 crc kubenswrapper[4736]: I0316 18:15:15.107593 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gzgr2" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" probeResult="failure" output=< Mar 16 18:15:15 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:15:15 crc kubenswrapper[4736]: > Mar 16 18:15:15 crc kubenswrapper[4736]: I0316 18:15:15.877557 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c"} Mar 16 18:15:24 crc kubenswrapper[4736]: I0316 18:15:24.166955 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:15:24 crc kubenswrapper[4736]: I0316 18:15:24.249447 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:15:24 crc kubenswrapper[4736]: I0316 18:15:24.423927 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:15:25 crc kubenswrapper[4736]: I0316 18:15:25.980438 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gzgr2" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" containerID="cri-o://353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba" gracePeriod=2 Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.862921 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.992138 4736 generic.go:334] "Generic (PLEG): container finished" podID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerID="353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba" exitCode=0 Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.993205 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gzgr2" Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.997661 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content\") pod \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.997932 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities\") pod \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " Mar 16 18:15:26 crc kubenswrapper[4736]: I0316 18:15:26.998018 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp8q9\" (UniqueName: \"kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9\") pod \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\" (UID: \"8ce4bf39-c93b-491c-98c7-59f4722ea8ac\") " Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.000172 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities" (OuterVolumeSpecName: "utilities") pod "8ce4bf39-c93b-491c-98c7-59f4722ea8ac" (UID: "8ce4bf39-c93b-491c-98c7-59f4722ea8ac"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.001313 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerDied","Data":"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba"} Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.001610 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gzgr2" event={"ID":"8ce4bf39-c93b-491c-98c7-59f4722ea8ac","Type":"ContainerDied","Data":"b3a2a472705ef3643a62d6903374ffb7befc93be6c9443c9f962639576623185"} Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.001679 4736 scope.go:117] "RemoveContainer" containerID="353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.005725 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9" (OuterVolumeSpecName: "kube-api-access-kp8q9") pod "8ce4bf39-c93b-491c-98c7-59f4722ea8ac" (UID: "8ce4bf39-c93b-491c-98c7-59f4722ea8ac"). InnerVolumeSpecName "kube-api-access-kp8q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.067564 4736 scope.go:117] "RemoveContainer" containerID="9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.100506 4736 scope.go:117] "RemoveContainer" containerID="dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.101390 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.101455 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp8q9\" (UniqueName: \"kubernetes.io/projected/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-kube-api-access-kp8q9\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.140828 4736 scope.go:117] "RemoveContainer" containerID="353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.141351 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ce4bf39-c93b-491c-98c7-59f4722ea8ac" (UID: "8ce4bf39-c93b-491c-98c7-59f4722ea8ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:15:27 crc kubenswrapper[4736]: E0316 18:15:27.144083 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba\": container with ID starting with 353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba not found: ID does not exist" containerID="353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.144212 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba"} err="failed to get container status \"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba\": rpc error: code = NotFound desc = could not find container \"353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba\": container with ID starting with 353d5ade27c4280038b832c63c4676ce3574869ecd3a819077f94d462733cbba not found: ID does not exist" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.144316 4736 scope.go:117] "RemoveContainer" containerID="9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c" Mar 16 18:15:27 crc kubenswrapper[4736]: E0316 18:15:27.144790 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c\": container with ID starting with 9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c not found: ID does not exist" containerID="9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.144870 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c"} err="failed to get container status \"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c\": rpc error: code = NotFound desc = could not find container \"9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c\": container with ID starting with 9c540e7d9b737fc2121f9742d55b555331d6ca19aad775d39f7b7749a708a19c not found: ID does not exist" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.144933 4736 scope.go:117] "RemoveContainer" containerID="dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0" Mar 16 18:15:27 crc kubenswrapper[4736]: E0316 18:15:27.145572 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0\": container with ID starting with dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0 not found: ID does not exist" containerID="dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.145662 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0"} err="failed to get container status \"dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0\": rpc error: code = NotFound desc = could not find container \"dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0\": container with ID starting with 
dd2ea98009dbceff6e999b2032a909a94808b81805512c7b32c3cc366b297de0 not found: ID does not exist" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.203143 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ce4bf39-c93b-491c-98c7-59f4722ea8ac-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.337198 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:15:27 crc kubenswrapper[4736]: I0316 18:15:27.345536 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gzgr2"] Mar 16 18:15:28 crc kubenswrapper[4736]: I0316 18:15:28.989688 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" path="/var/lib/kubelet/pods/8ce4bf39-c93b-491c-98c7-59f4722ea8ac/volumes" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.165237 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561416-hqr4s"] Mar 16 18:16:00 crc kubenswrapper[4736]: E0316 18:16:00.167881 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.167926 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" Mar 16 18:16:00 crc kubenswrapper[4736]: E0316 18:16:00.167957 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3963c081-267c-4116-b66f-06a77ceccd93" containerName="collect-profiles" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.167968 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3963c081-267c-4116-b66f-06a77ceccd93" containerName="collect-profiles" Mar 16 18:16:00 crc kubenswrapper[4736]: E0316 18:16:00.167998 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="extract-utilities" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.168009 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="extract-utilities" Mar 16 18:16:00 crc kubenswrapper[4736]: E0316 18:16:00.168050 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="extract-content" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.168061 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="extract-content" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.169024 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce4bf39-c93b-491c-98c7-59f4722ea8ac" containerName="registry-server" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.169078 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3963c081-267c-4116-b66f-06a77ceccd93" containerName="collect-profiles" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.170153 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.172444 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.174713 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.174949 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.208582 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561416-hqr4s"] Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.250622 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvthw\" (UniqueName: \"kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw\") pod \"auto-csr-approver-29561416-hqr4s\" (UID: \"3ba55e57-66b3-4e34-b234-25bef1fb480e\") " pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.353555 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvthw\" (UniqueName: \"kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw\") pod \"auto-csr-approver-29561416-hqr4s\" (UID: \"3ba55e57-66b3-4e34-b234-25bef1fb480e\") " pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.384240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvthw\" (UniqueName: \"kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw\") pod \"auto-csr-approver-29561416-hqr4s\" (UID: \"3ba55e57-66b3-4e34-b234-25bef1fb480e\") " pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:00 crc kubenswrapper[4736]: I0316 18:16:00.498827 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:01 crc kubenswrapper[4736]: I0316 18:16:01.054551 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561416-hqr4s"] Mar 16 18:16:01 crc kubenswrapper[4736]: I0316 18:16:01.356663 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" event={"ID":"3ba55e57-66b3-4e34-b234-25bef1fb480e","Type":"ContainerStarted","Data":"6b3fa06eb3eea7af18f0a959f9296f03a9d1878800c23e1a10cabb481f4e8666"} Mar 16 18:16:03 crc kubenswrapper[4736]: I0316 18:16:03.381074 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" event={"ID":"3ba55e57-66b3-4e34-b234-25bef1fb480e","Type":"ContainerStarted","Data":"9820eee29b4b5d2f04ab375553743894706dd994bc1cbfe506352610311da808"} Mar 16 18:16:03 crc kubenswrapper[4736]: I0316 18:16:03.400989 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" podStartSLOduration=2.25954043 podStartE2EDuration="3.400967329s" podCreationTimestamp="2026-03-16 18:16:00 +0000 UTC" firstStartedPulling="2026-03-16 18:16:01.061570731 +0000 UTC m=+10962.788961018" lastFinishedPulling="2026-03-16 18:16:02.20299761 +0000 UTC m=+10963.930387917" observedRunningTime="2026-03-16 18:16:03.398824131 +0000 UTC m=+10965.126214418" watchObservedRunningTime="2026-03-16 18:16:03.400967329 +0000 UTC m=+10965.128357616" Mar 16 18:16:04 crc kubenswrapper[4736]: I0316 18:16:04.392490 4736 generic.go:334] "Generic (PLEG): container finished" podID="3ba55e57-66b3-4e34-b234-25bef1fb480e" containerID="9820eee29b4b5d2f04ab375553743894706dd994bc1cbfe506352610311da808" exitCode=0 Mar 16 18:16:04 crc kubenswrapper[4736]: I0316 18:16:04.392578 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" event={"ID":"3ba55e57-66b3-4e34-b234-25bef1fb480e","Type":"ContainerDied","Data":"9820eee29b4b5d2f04ab375553743894706dd994bc1cbfe506352610311da808"} Mar 16 18:16:05 crc kubenswrapper[4736]: I0316 18:16:05.846071 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:05 crc kubenswrapper[4736]: I0316 18:16:05.974671 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvthw\" (UniqueName: \"kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw\") pod \"3ba55e57-66b3-4e34-b234-25bef1fb480e\" (UID: \"3ba55e57-66b3-4e34-b234-25bef1fb480e\") " Mar 16 18:16:05 crc kubenswrapper[4736]: I0316 18:16:05.985555 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw" (OuterVolumeSpecName: "kube-api-access-nvthw") pod "3ba55e57-66b3-4e34-b234-25bef1fb480e" (UID: "3ba55e57-66b3-4e34-b234-25bef1fb480e"). InnerVolumeSpecName "kube-api-access-nvthw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.079041 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvthw\" (UniqueName: \"kubernetes.io/projected/3ba55e57-66b3-4e34-b234-25bef1fb480e-kube-api-access-nvthw\") on node \"crc\" DevicePath \"\"" Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.417004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" event={"ID":"3ba55e57-66b3-4e34-b234-25bef1fb480e","Type":"ContainerDied","Data":"6b3fa06eb3eea7af18f0a959f9296f03a9d1878800c23e1a10cabb481f4e8666"} Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.417068 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3fa06eb3eea7af18f0a959f9296f03a9d1878800c23e1a10cabb481f4e8666" Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.417096 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561416-hqr4s" Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.492918 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561410-x66fm"] Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.501604 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561410-x66fm"] Mar 16 18:16:06 crc kubenswrapper[4736]: I0316 18:16:06.988873 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cb33b8-6cd0-41c6-8215-a2c837e00939" path="/var/lib/kubelet/pods/26cb33b8-6cd0-41c6-8215-a2c837e00939/volumes" Mar 16 18:16:08 crc kubenswrapper[4736]: I0316 18:16:08.105612 4736 scope.go:117] "RemoveContainer" containerID="587caa56124088812954bc250081036706a9eb7caa0d64f0838f5d47c93ba3a6" Mar 16 18:17:38 crc kubenswrapper[4736]: I0316 18:17:38.508542 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:17:38 crc kubenswrapper[4736]: I0316 18:17:38.509192 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.179965 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561418-w9mwf"] Mar 16 18:18:00 crc kubenswrapper[4736]: E0316 18:18:00.181038 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba55e57-66b3-4e34-b234-25bef1fb480e" containerName="oc" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.181055 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba55e57-66b3-4e34-b234-25bef1fb480e" containerName="oc" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.181304 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba55e57-66b3-4e34-b234-25bef1fb480e" containerName="oc" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.182584 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.185932 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.187814 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.188704 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.197670 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561418-w9mwf"] Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.294533 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmnrq\" (UniqueName: \"kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq\") pod \"auto-csr-approver-29561418-w9mwf\" (UID: \"2b68c55f-995a-4b03-aa49-9da54f252b8f\") " pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.396561 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmnrq\" (UniqueName: \"kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq\") pod \"auto-csr-approver-29561418-w9mwf\" (UID: \"2b68c55f-995a-4b03-aa49-9da54f252b8f\") " pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.696032 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmnrq\" (UniqueName: \"kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq\") pod \"auto-csr-approver-29561418-w9mwf\" (UID: \"2b68c55f-995a-4b03-aa49-9da54f252b8f\") " pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:00 crc kubenswrapper[4736]: I0316 18:18:00.804336 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:01 crc kubenswrapper[4736]: I0316 18:18:01.364782 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561418-w9mwf"] Mar 16 18:18:01 crc kubenswrapper[4736]: I0316 18:18:01.611045 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" event={"ID":"2b68c55f-995a-4b03-aa49-9da54f252b8f","Type":"ContainerStarted","Data":"0461f252ecc320842c23b6a067b195a684cd333ecc2ff15290b4de7d80954de0"} Mar 16 18:18:03 crc kubenswrapper[4736]: I0316 18:18:03.632424 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" event={"ID":"2b68c55f-995a-4b03-aa49-9da54f252b8f","Type":"ContainerStarted","Data":"d6652ceaa7183144175a7141ccc52da14e358ef43b04506a9ba1b7641c6f142e"} Mar 16 18:18:03 crc kubenswrapper[4736]: I0316 18:18:03.682208 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" podStartSLOduration=2.621157291 podStartE2EDuration="3.682183277s" podCreationTimestamp="2026-03-16 18:18:00 +0000 UTC" firstStartedPulling="2026-03-16 18:18:01.376447006 +0000 UTC m=+11083.103837283" lastFinishedPulling="2026-03-16 18:18:02.437472952 +0000 UTC m=+11084.164863269" observedRunningTime="2026-03-16 18:18:03.646936935 +0000 UTC m=+11085.374327232" watchObservedRunningTime="2026-03-16 18:18:03.682183277 +0000 UTC m=+11085.409573564" Mar 16 18:18:04 crc kubenswrapper[4736]: I0316 18:18:04.646583 4736 generic.go:334] "Generic (PLEG): container finished" podID="2b68c55f-995a-4b03-aa49-9da54f252b8f" containerID="d6652ceaa7183144175a7141ccc52da14e358ef43b04506a9ba1b7641c6f142e" exitCode=0 Mar 16 18:18:04 crc kubenswrapper[4736]: I0316 18:18:04.646658 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" event={"ID":"2b68c55f-995a-4b03-aa49-9da54f252b8f","Type":"ContainerDied","Data":"d6652ceaa7183144175a7141ccc52da14e358ef43b04506a9ba1b7641c6f142e"} Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.314191 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.427760 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmnrq\" (UniqueName: \"kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq\") pod \"2b68c55f-995a-4b03-aa49-9da54f252b8f\" (UID: \"2b68c55f-995a-4b03-aa49-9da54f252b8f\") " Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.440903 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq" (OuterVolumeSpecName: "kube-api-access-tmnrq") pod "2b68c55f-995a-4b03-aa49-9da54f252b8f" (UID: "2b68c55f-995a-4b03-aa49-9da54f252b8f"). InnerVolumeSpecName "kube-api-access-tmnrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.530347 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmnrq\" (UniqueName: \"kubernetes.io/projected/2b68c55f-995a-4b03-aa49-9da54f252b8f-kube-api-access-tmnrq\") on node \"crc\" DevicePath \"\"" Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.709519 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" event={"ID":"2b68c55f-995a-4b03-aa49-9da54f252b8f","Type":"ContainerDied","Data":"0461f252ecc320842c23b6a067b195a684cd333ecc2ff15290b4de7d80954de0"} Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.709552 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0461f252ecc320842c23b6a067b195a684cd333ecc2ff15290b4de7d80954de0" Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.709579 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561418-w9mwf" Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.754416 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561412-kt2hp"] Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.763473 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561412-kt2hp"] Mar 16 18:18:06 crc kubenswrapper[4736]: I0316 18:18:06.992358 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf49138-10b0-43bd-8ca8-52f909a3c064" path="/var/lib/kubelet/pods/cdf49138-10b0-43bd-8ca8-52f909a3c064/volumes" Mar 16 18:18:08 crc kubenswrapper[4736]: I0316 18:18:08.301058 4736 scope.go:117] "RemoveContainer" containerID="cfd6d5060fbcdd094d2b9493f5b7a3d3ccd0862a871e8c6f1d301f2fafea2a9c" Mar 16 18:18:08 crc kubenswrapper[4736]: I0316 18:18:08.514504 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:18:08 crc kubenswrapper[4736]: I0316 18:18:08.517504 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:18:38 crc kubenswrapper[4736]: I0316 18:18:38.508246 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:18:38 crc kubenswrapper[4736]: I0316 18:18:38.508982 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:18:38 crc kubenswrapper[4736]: I0316 18:18:38.509036 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:18:38 crc kubenswrapper[4736]: I0316 18:18:38.510130 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:18:38 crc kubenswrapper[4736]: I0316 18:18:38.510207 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c" gracePeriod=600 Mar 16 18:18:39 crc kubenswrapper[4736]: I0316 18:18:39.361327 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c" exitCode=0 Mar 16 18:18:39 crc kubenswrapper[4736]: I0316 18:18:39.361390 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c"} Mar 16 18:18:39 crc kubenswrapper[4736]: I0316 18:18:39.361815 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f"} Mar 16 18:18:39 crc kubenswrapper[4736]: I0316 18:18:39.361833 4736 scope.go:117] "RemoveContainer" containerID="5ef036cc752d6e767e7721f181e75d3fbe539bfb0d8eed37940a3ecfa0473973" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.032485 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:19 crc kubenswrapper[4736]: E0316 18:19:19.033270 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b68c55f-995a-4b03-aa49-9da54f252b8f" containerName="oc" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.033282 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b68c55f-995a-4b03-aa49-9da54f252b8f" containerName="oc" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.033481 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b68c55f-995a-4b03-aa49-9da54f252b8f" containerName="oc" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.037984 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.069635 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.235085 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.235151 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.235504 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs87l\" (UniqueName: \"kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.337654 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.337991 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.338152 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.338320 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs87l\" (UniqueName: \"kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.338456 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.364750 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zs87l\" (UniqueName: \"kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l\") pod \"community-operators-hkwkw\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:19 crc kubenswrapper[4736]: I0316 18:19:19.659590 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:20 crc kubenswrapper[4736]: I0316 18:19:20.494409 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:20 crc kubenswrapper[4736]: I0316 18:19:20.787244 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerStarted","Data":"76547c7ad99adf76be8c5568c96a3d193f1d6aac0f1052073d8aff03d668687b"} Mar 16 18:19:21 crc kubenswrapper[4736]: I0316 18:19:21.797814 4736 generic.go:334] "Generic (PLEG): container finished" podID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerID="d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2" exitCode=0 Mar 16 18:19:21 crc kubenswrapper[4736]: I0316 18:19:21.797885 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerDied","Data":"d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2"} Mar 16 18:19:21 crc kubenswrapper[4736]: I0316 18:19:21.802456 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:19:22 crc kubenswrapper[4736]: I0316 18:19:22.809232 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerStarted","Data":"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122"} Mar 16 18:19:24 crc kubenswrapper[4736]: I0316 18:19:24.829596 4736 generic.go:334] "Generic (PLEG): container finished" podID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerID="d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122" exitCode=0 Mar 16 18:19:24 crc kubenswrapper[4736]: I0316 18:19:24.829638 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerDied","Data":"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122"} Mar 16 18:19:25 crc kubenswrapper[4736]: I0316 18:19:25.843547 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerStarted","Data":"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e"} Mar 16 18:19:25 crc kubenswrapper[4736]: I0316 18:19:25.879184 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hkwkw" podStartSLOduration=4.459632683 podStartE2EDuration="7.879160318s" podCreationTimestamp="2026-03-16 18:19:18 +0000 UTC" firstStartedPulling="2026-03-16 18:19:21.799813845 +0000 UTC m=+11163.527204142" lastFinishedPulling="2026-03-16 18:19:25.2193415 +0000 UTC m=+11166.946731777" observedRunningTime="2026-03-16 18:19:25.867970283 +0000 UTC m=+11167.595360570" watchObservedRunningTime="2026-03-16 
18:19:25.879160318 +0000 UTC m=+11167.606550615" Mar 16 18:19:29 crc kubenswrapper[4736]: I0316 18:19:29.660617 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:29 crc kubenswrapper[4736]: I0316 18:19:29.661152 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:30 crc kubenswrapper[4736]: I0316 18:19:30.720395 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hkwkw" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="registry-server" probeResult="failure" output=< Mar 16 18:19:30 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:19:30 crc kubenswrapper[4736]: > Mar 16 18:19:39 crc kubenswrapper[4736]: I0316 18:19:39.739773 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:39 crc kubenswrapper[4736]: I0316 18:19:39.828211 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:40 crc kubenswrapper[4736]: I0316 18:19:40.000005 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:40 crc kubenswrapper[4736]: I0316 18:19:40.979294 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hkwkw" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="registry-server" containerID="cri-o://6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e" gracePeriod=2 Mar 16 18:19:41 crc kubenswrapper[4736]: I0316 18:19:41.926547 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.000621 4736 generic.go:334] "Generic (PLEG): container finished" podID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerID="6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e" exitCode=0 Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.000664 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerDied","Data":"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e"} Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.000704 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hkwkw" event={"ID":"0afe7620-23ae-47a7-a7ae-02e9b675ea66","Type":"ContainerDied","Data":"76547c7ad99adf76be8c5568c96a3d193f1d6aac0f1052073d8aff03d668687b"} Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.001645 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hkwkw" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.002301 4736 scope.go:117] "RemoveContainer" containerID="6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.041702 4736 scope.go:117] "RemoveContainer" containerID="d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.078385 4736 scope.go:117] "RemoveContainer" containerID="d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.099890 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities\") pod \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.099990 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content\") pod \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.100130 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs87l\" (UniqueName: \"kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l\") pod \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\" (UID: \"0afe7620-23ae-47a7-a7ae-02e9b675ea66\") " Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.103202 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities" (OuterVolumeSpecName: "utilities") pod "0afe7620-23ae-47a7-a7ae-02e9b675ea66" (UID: "0afe7620-23ae-47a7-a7ae-02e9b675ea66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.137273 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l" (OuterVolumeSpecName: "kube-api-access-zs87l") pod "0afe7620-23ae-47a7-a7ae-02e9b675ea66" (UID: "0afe7620-23ae-47a7-a7ae-02e9b675ea66"). InnerVolumeSpecName "kube-api-access-zs87l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.144775 4736 scope.go:117] "RemoveContainer" containerID="6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e" Mar 16 18:19:42 crc kubenswrapper[4736]: E0316 18:19:42.152929 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e\": container with ID starting with 6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e not found: ID does not exist" containerID="6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.153684 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e"} err="failed to get container status \"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e\": rpc error: code = NotFound desc = could not find container \"6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e\": container with ID starting with 6ecaf641ce3106dc810790c56e129a75bcdfbd0bba11ea44f3c1ca7ba4a0ca5e not found: ID does not exist" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.153724 4736 scope.go:117] "RemoveContainer" containerID="d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122" Mar 16 18:19:42 crc kubenswrapper[4736]: E0316 18:19:42.154609 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122\": container with ID starting with d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122 not found: ID does not exist" containerID="d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.154649 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122"} err="failed to get container status \"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122\": rpc error: code = NotFound desc = could not find container \"d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122\": container with ID starting with d15d02d50fb36d5428e3f3966f0f5f4f202839426156d7fe1c06c61ab35fe122 not found: ID does not exist" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.154673 4736 scope.go:117] "RemoveContainer" containerID="d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2" Mar 16 18:19:42 crc kubenswrapper[4736]: E0316 18:19:42.155735 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2\": container with ID starting with d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2 not found: ID does not exist" containerID="d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.155768 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2"} err="failed to get container status \"d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2\": rpc error: code = NotFound desc = could not 
find container \"d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2\": container with ID starting with d98babf98ffb1cfbb546b25a8f4e4f0fd0166f56f0680325a01b3356a917b7f2 not found: ID does not exist" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.185835 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0afe7620-23ae-47a7-a7ae-02e9b675ea66" (UID: "0afe7620-23ae-47a7-a7ae-02e9b675ea66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.203034 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs87l\" (UniqueName: \"kubernetes.io/projected/0afe7620-23ae-47a7-a7ae-02e9b675ea66-kube-api-access-zs87l\") on node \"crc\" DevicePath \"\"" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.203071 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.203082 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0afe7620-23ae-47a7-a7ae-02e9b675ea66-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.359797 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.382064 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hkwkw"] Mar 16 18:19:42 crc kubenswrapper[4736]: I0316 18:19:42.988887 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" path="/var/lib/kubelet/pods/0afe7620-23ae-47a7-a7ae-02e9b675ea66/volumes" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.225702 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561420-c2c7m"] Mar 16 18:20:00 crc kubenswrapper[4736]: E0316 18:20:00.229378 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="extract-content" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.229423 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="extract-content" Mar 16 18:20:00 crc kubenswrapper[4736]: E0316 18:20:00.229480 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="registry-server" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.229489 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="registry-server" Mar 16 18:20:00 crc kubenswrapper[4736]: E0316 18:20:00.229519 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="extract-utilities" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.229527 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="extract-utilities" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.229988 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0afe7620-23ae-47a7-a7ae-02e9b675ea66" containerName="registry-server" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.232551 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.236176 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561420-c2c7m"] Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.247382 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.247755 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.247392 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.368886 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfzq\" (UniqueName: \"kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq\") pod \"auto-csr-approver-29561420-c2c7m\" (UID: \"1a57c754-211e-4ba8-a9a7-d194e5d82012\") " pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.470604 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjfzq\" (UniqueName: \"kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq\") pod \"auto-csr-approver-29561420-c2c7m\" (UID: \"1a57c754-211e-4ba8-a9a7-d194e5d82012\") " pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.500813 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjfzq\" (UniqueName: \"kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq\") pod \"auto-csr-approver-29561420-c2c7m\" (UID: \"1a57c754-211e-4ba8-a9a7-d194e5d82012\") " pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:00 crc kubenswrapper[4736]: I0316 18:20:00.573174 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:01 crc kubenswrapper[4736]: I0316 18:20:01.062024 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561420-c2c7m"] Mar 16 18:20:01 crc kubenswrapper[4736]: I0316 18:20:01.192164 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" event={"ID":"1a57c754-211e-4ba8-a9a7-d194e5d82012","Type":"ContainerStarted","Data":"84d7fa496b3ebc8dc37986403e60c99d3a327fa3a67ff112ce2ee1738b718884"} Mar 16 18:20:04 crc kubenswrapper[4736]: I0316 18:20:04.221141 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" event={"ID":"1a57c754-211e-4ba8-a9a7-d194e5d82012","Type":"ContainerStarted","Data":"3fac0eb2ee8f1212990f4c459bd12f9900b102c8f958bbcfdbddba82aede1e79"} Mar 16 18:20:04 crc kubenswrapper[4736]: I0316 18:20:04.245004 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" podStartSLOduration=2.535427672 podStartE2EDuration="4.244958488s" podCreationTimestamp="2026-03-16 18:20:00 +0000 UTC" firstStartedPulling="2026-03-16 18:20:01.063895904 +0000 UTC m=+11202.791286191" lastFinishedPulling="2026-03-16 18:20:02.77342671 +0000 UTC m=+11204.500817007" observedRunningTime="2026-03-16 18:20:04.239283353 +0000 UTC m=+11205.966673680" watchObservedRunningTime="2026-03-16 18:20:04.244958488 +0000 UTC m=+11205.972348785" Mar 16 18:20:05 crc kubenswrapper[4736]: I0316 18:20:05.231400 4736 generic.go:334] "Generic (PLEG): container finished" podID="1a57c754-211e-4ba8-a9a7-d194e5d82012" containerID="3fac0eb2ee8f1212990f4c459bd12f9900b102c8f958bbcfdbddba82aede1e79" exitCode=0 Mar 16 18:20:05 crc kubenswrapper[4736]: I0316 18:20:05.231555 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" event={"ID":"1a57c754-211e-4ba8-a9a7-d194e5d82012","Type":"ContainerDied","Data":"3fac0eb2ee8f1212990f4c459bd12f9900b102c8f958bbcfdbddba82aede1e79"} Mar 16 18:20:06 crc kubenswrapper[4736]: I0316 18:20:06.739900 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:06 crc kubenswrapper[4736]: I0316 18:20:06.796589 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjfzq\" (UniqueName: \"kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq\") pod \"1a57c754-211e-4ba8-a9a7-d194e5d82012\" (UID: \"1a57c754-211e-4ba8-a9a7-d194e5d82012\") " Mar 16 18:20:06 crc kubenswrapper[4736]: I0316 18:20:06.801995 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq" (OuterVolumeSpecName: "kube-api-access-wjfzq") pod "1a57c754-211e-4ba8-a9a7-d194e5d82012" (UID: "1a57c754-211e-4ba8-a9a7-d194e5d82012"). InnerVolumeSpecName "kube-api-access-wjfzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:20:06 crc kubenswrapper[4736]: I0316 18:20:06.898334 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjfzq\" (UniqueName: \"kubernetes.io/projected/1a57c754-211e-4ba8-a9a7-d194e5d82012-kube-api-access-wjfzq\") on node \"crc\" DevicePath \"\"" Mar 16 18:20:07 crc kubenswrapper[4736]: I0316 18:20:07.254920 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" event={"ID":"1a57c754-211e-4ba8-a9a7-d194e5d82012","Type":"ContainerDied","Data":"84d7fa496b3ebc8dc37986403e60c99d3a327fa3a67ff112ce2ee1738b718884"} Mar 16 18:20:07 crc kubenswrapper[4736]: I0316 18:20:07.254967 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84d7fa496b3ebc8dc37986403e60c99d3a327fa3a67ff112ce2ee1738b718884" Mar 16 18:20:07 crc kubenswrapper[4736]: I0316 18:20:07.255056 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561420-c2c7m" Mar 16 18:20:07 crc kubenswrapper[4736]: I0316 18:20:07.340280 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561414-8zdsm"] Mar 16 18:20:07 crc kubenswrapper[4736]: I0316 18:20:07.348749 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561414-8zdsm"] Mar 16 18:20:08 crc kubenswrapper[4736]: I0316 18:20:08.991291 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc89d032-c80e-4004-9550-f269d0bf354e" path="/var/lib/kubelet/pods/dc89d032-c80e-4004-9550-f269d0bf354e/volumes" Mar 16 18:20:38 crc kubenswrapper[4736]: I0316 18:20:38.507947 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:20:38 crc kubenswrapper[4736]: I0316 18:20:38.509133 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.587471 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:20:53 crc kubenswrapper[4736]: E0316 18:20:53.588771 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a57c754-211e-4ba8-a9a7-d194e5d82012" containerName="oc" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.588795 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a57c754-211e-4ba8-a9a7-d194e5d82012" containerName="oc" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.589181 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a57c754-211e-4ba8-a9a7-d194e5d82012" containerName="oc" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.592314 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.602298 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.683080 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.683435 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjv24\" (UniqueName: \"kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.683470 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.784945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.785194 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjv24\" (UniqueName: \"kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.785226 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.787509 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.787617 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.810860 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cjv24\" (UniqueName: \"kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24\") pod \"redhat-marketplace-5d6tr\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:53 crc kubenswrapper[4736]: I0316 18:20:53.960651 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:20:54 crc kubenswrapper[4736]: I0316 18:20:54.707351 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:20:54 crc kubenswrapper[4736]: I0316 18:20:54.752475 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerStarted","Data":"c6c5f09701724ce25364f1e3cbd88cf44ad1994e01ef1a019af7bc1fd8f0b501"} Mar 16 18:20:55 crc kubenswrapper[4736]: I0316 18:20:55.765840 4736 generic.go:334] "Generic (PLEG): container finished" podID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerID="caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad" exitCode=0 Mar 16 18:20:55 crc kubenswrapper[4736]: I0316 18:20:55.765947 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerDied","Data":"caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad"} Mar 16 18:20:57 crc kubenswrapper[4736]: I0316 18:20:57.789339 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerStarted","Data":"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2"} Mar 16 18:20:58 crc kubenswrapper[4736]: I0316 18:20:58.801232 4736 generic.go:334] "Generic (PLEG): container finished" podID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerID="2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2" exitCode=0 Mar 16 18:20:58 crc kubenswrapper[4736]: I0316 18:20:58.801354 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerDied","Data":"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2"} Mar 16 18:20:59 crc kubenswrapper[4736]: I0316 18:20:59.811955 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerStarted","Data":"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57"} Mar 16 18:20:59 crc kubenswrapper[4736]: I0316 18:20:59.833126 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5d6tr" podStartSLOduration=3.245568412 podStartE2EDuration="6.833083316s" podCreationTimestamp="2026-03-16 18:20:53 +0000 UTC" firstStartedPulling="2026-03-16 18:20:55.768274789 +0000 UTC m=+11257.495665086" lastFinishedPulling="2026-03-16 18:20:59.355789683 +0000 UTC m=+11261.083179990" observedRunningTime="2026-03-16 18:20:59.832038707 +0000 UTC m=+11261.559429004" watchObservedRunningTime="2026-03-16 18:20:59.833083316 +0000 UTC m=+11261.560473613" Mar 16 18:21:03 crc kubenswrapper[4736]: I0316 18:21:03.961167 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:03 crc kubenswrapper[4736]: I0316 18:21:03.961747 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:05 crc kubenswrapper[4736]: I0316 18:21:05.047091 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5d6tr" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="registry-server" probeResult="failure" output=< Mar 16 18:21:05 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:21:05 crc kubenswrapper[4736]: > Mar 16 18:21:08 crc kubenswrapper[4736]: I0316 18:21:08.508486 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:21:08 crc kubenswrapper[4736]: I0316 18:21:08.508850 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:21:08 crc kubenswrapper[4736]: I0316 18:21:08.557501 4736 scope.go:117] "RemoveContainer" containerID="4d4e20458d9810a832362d1480adba0c2be0c92eaf8beaecddfac0ab961d785d" Mar 16 18:21:14 crc kubenswrapper[4736]: I0316 18:21:14.033453 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:14 crc kubenswrapper[4736]: I0316 18:21:14.113702 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:14 crc kubenswrapper[4736]: I0316 18:21:14.280353 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.004237 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5d6tr" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="registry-server" containerID="cri-o://aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57" gracePeriod=2 Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.689297 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.771313 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjv24\" (UniqueName: \"kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24\") pod \"3a534a8f-b132-423f-8555-2556c9f2c9d0\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.771455 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content\") pod \"3a534a8f-b132-423f-8555-2556c9f2c9d0\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.771490 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities\") pod \"3a534a8f-b132-423f-8555-2556c9f2c9d0\" (UID: \"3a534a8f-b132-423f-8555-2556c9f2c9d0\") " Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.772370 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities" (OuterVolumeSpecName: "utilities") pod "3a534a8f-b132-423f-8555-2556c9f2c9d0" (UID: "3a534a8f-b132-423f-8555-2556c9f2c9d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.788658 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24" (OuterVolumeSpecName: "kube-api-access-cjv24") pod "3a534a8f-b132-423f-8555-2556c9f2c9d0" (UID: "3a534a8f-b132-423f-8555-2556c9f2c9d0"). InnerVolumeSpecName "kube-api-access-cjv24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.798915 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a534a8f-b132-423f-8555-2556c9f2c9d0" (UID: "3a534a8f-b132-423f-8555-2556c9f2c9d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.873495 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.873526 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a534a8f-b132-423f-8555-2556c9f2c9d0-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:21:16 crc kubenswrapper[4736]: I0316 18:21:16.873537 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjv24\" (UniqueName: \"kubernetes.io/projected/3a534a8f-b132-423f-8555-2556c9f2c9d0-kube-api-access-cjv24\") on node \"crc\" DevicePath \"\"" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.018147 4736 generic.go:334] "Generic (PLEG): container finished" podID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerID="aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57" exitCode=0 Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.018206 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerDied","Data":"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57"} Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.018240 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5d6tr" event={"ID":"3a534a8f-b132-423f-8555-2556c9f2c9d0","Type":"ContainerDied","Data":"c6c5f09701724ce25364f1e3cbd88cf44ad1994e01ef1a019af7bc1fd8f0b501"} Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.018260 4736 scope.go:117] "RemoveContainer" containerID="aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.018276 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5d6tr" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.048693 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.051345 4736 scope.go:117] "RemoveContainer" containerID="2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.058537 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5d6tr"] Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.074266 4736 scope.go:117] "RemoveContainer" containerID="caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.119168 4736 scope.go:117] "RemoveContainer" containerID="aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57" Mar 16 18:21:17 crc kubenswrapper[4736]: E0316 18:21:17.119563 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57\": container with ID starting with aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57 not found: ID does not exist" containerID="aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.119593 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57"} err="failed to get container status \"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57\": rpc error: code = NotFound desc = could not find container \"aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57\": container with ID starting with aceb010b137b767f97892c5404a2ca69203baf3ae0ad027ada9e00df8af73f57 not found: ID does not exist" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.119612 4736 scope.go:117] "RemoveContainer" containerID="2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2" Mar 16 18:21:17 crc kubenswrapper[4736]: E0316 18:21:17.119969 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2\": container with ID starting with 2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2 not found: ID does not exist" containerID="2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.119992 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2"} err="failed to get container status \"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2\": rpc error: code = NotFound desc = could not find container \"2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2\": container with ID starting with 2fd0d0538a57473697305f0db1845d0e25d0264114b9e02d548e289306cabdd2 not found: ID does not exist" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.120006 4736 scope.go:117] "RemoveContainer" containerID="caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad" Mar 16 18:21:17 crc kubenswrapper[4736]: E0316 18:21:17.120232 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad\": container with ID starting with caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad not found: ID does not exist" containerID="caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad" Mar 16 18:21:17 crc kubenswrapper[4736]: I0316 18:21:17.120264 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad"} err="failed to get container status \"caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad\": rpc error: code = NotFound desc = could not find container \"caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad\": container with ID starting with caf103b52b1dc0c634d352fe750a5d5dd1764a068854ebccf7921aa7b645f6ad not found: ID does not exist" Mar 16 18:21:19 crc kubenswrapper[4736]: I0316 18:21:19.003196 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" path="/var/lib/kubelet/pods/3a534a8f-b132-423f-8555-2556c9f2c9d0/volumes" Mar 16 18:21:38 crc kubenswrapper[4736]: I0316 18:21:38.507704 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:21:38 crc kubenswrapper[4736]: I0316 18:21:38.508561 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:21:38 crc kubenswrapper[4736]: I0316 18:21:38.508648 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:21:38 crc kubenswrapper[4736]: I0316 18:21:38.509799 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:21:38 crc kubenswrapper[4736]: I0316 18:21:38.509884 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" gracePeriod=600 Mar 16 18:21:38 crc kubenswrapper[4736]: E0316 18:21:38.647310 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:21:39 crc kubenswrapper[4736]: I0316 18:21:39.415309 4736 generic.go:334] 
"Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" exitCode=0 Mar 16 18:21:39 crc kubenswrapper[4736]: I0316 18:21:39.415405 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f"} Mar 16 18:21:39 crc kubenswrapper[4736]: I0316 18:21:39.416178 4736 scope.go:117] "RemoveContainer" containerID="a4760e567f9fb62c26edce85dfb5e47a9aadfa5accff364e505237cc46bcf10c" Mar 16 18:21:39 crc kubenswrapper[4736]: I0316 18:21:39.416663 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:21:39 crc kubenswrapper[4736]: E0316 18:21:39.417182 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:21:50 crc kubenswrapper[4736]: I0316 18:21:50.982601 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:21:50 crc kubenswrapper[4736]: E0316 18:21:50.983775 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.156570 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561422-nfv2c"] Mar 16 18:22:00 crc kubenswrapper[4736]: E0316 18:22:00.157696 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="registry-server" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.157714 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="registry-server" Mar 16 18:22:00 crc kubenswrapper[4736]: E0316 18:22:00.157730 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="extract-utilities" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.157739 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="extract-utilities" Mar 16 18:22:00 crc kubenswrapper[4736]: E0316 18:22:00.157769 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="extract-content" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.157778 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="extract-content" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.158051 4736 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3a534a8f-b132-423f-8555-2556c9f2c9d0" containerName="registry-server" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.159044 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.166375 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.166425 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.166573 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.182459 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561422-nfv2c"] Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.255552 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqn98\" (UniqueName: \"kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98\") pod \"auto-csr-approver-29561422-nfv2c\" (UID: \"a122c60e-8e40-4ea3-a9e3-44ac911ae23c\") " pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.358472 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqn98\" (UniqueName: \"kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98\") pod \"auto-csr-approver-29561422-nfv2c\" (UID: \"a122c60e-8e40-4ea3-a9e3-44ac911ae23c\") " pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.382038 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqn98\" (UniqueName: \"kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98\") pod \"auto-csr-approver-29561422-nfv2c\" (UID: \"a122c60e-8e40-4ea3-a9e3-44ac911ae23c\") " pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:00 crc kubenswrapper[4736]: I0316 18:22:00.487065 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:01 crc kubenswrapper[4736]: I0316 18:22:01.025592 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561422-nfv2c"] Mar 16 18:22:01 crc kubenswrapper[4736]: W0316 18:22:01.029697 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda122c60e_8e40_4ea3_a9e3_44ac911ae23c.slice/crio-81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5 WatchSource:0}: Error finding container 81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5: Status 404 returned error can't find the container with id 81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5 Mar 16 18:22:01 crc kubenswrapper[4736]: I0316 18:22:01.666912 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" event={"ID":"a122c60e-8e40-4ea3-a9e3-44ac911ae23c","Type":"ContainerStarted","Data":"81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5"} Mar 16 18:22:02 crc kubenswrapper[4736]: I0316 18:22:02.978210 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:22:02 crc kubenswrapper[4736]: E0316 18:22:02.978915 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:22:03 crc kubenswrapper[4736]: I0316 18:22:03.708639 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" event={"ID":"a122c60e-8e40-4ea3-a9e3-44ac911ae23c","Type":"ContainerStarted","Data":"06a6a3f6fc11839851938e0a0dd53934eedf4d5749df76a976dd37c6247197d5"} Mar 16 18:22:03 crc kubenswrapper[4736]: I0316 18:22:03.740508 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" podStartSLOduration=2.451458212 podStartE2EDuration="3.740488754s" podCreationTimestamp="2026-03-16 18:22:00 +0000 UTC" firstStartedPulling="2026-03-16 18:22:01.030911044 +0000 UTC m=+11322.758301331" lastFinishedPulling="2026-03-16 18:22:02.319941576 +0000 UTC m=+11324.047331873" observedRunningTime="2026-03-16 18:22:03.733099393 +0000 UTC m=+11325.460489680" watchObservedRunningTime="2026-03-16 18:22:03.740488754 +0000 UTC m=+11325.467879041" Mar 16 18:22:04 crc kubenswrapper[4736]: I0316 18:22:04.725053 4736 generic.go:334] "Generic (PLEG): container finished" podID="a122c60e-8e40-4ea3-a9e3-44ac911ae23c" containerID="06a6a3f6fc11839851938e0a0dd53934eedf4d5749df76a976dd37c6247197d5" exitCode=0 Mar 16 18:22:04 crc kubenswrapper[4736]: I0316 18:22:04.725134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" event={"ID":"a122c60e-8e40-4ea3-a9e3-44ac911ae23c","Type":"ContainerDied","Data":"06a6a3f6fc11839851938e0a0dd53934eedf4d5749df76a976dd37c6247197d5"} Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.353378 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.489675 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqn98\" (UniqueName: \"kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98\") pod \"a122c60e-8e40-4ea3-a9e3-44ac911ae23c\" (UID: \"a122c60e-8e40-4ea3-a9e3-44ac911ae23c\") " Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.508786 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98" (OuterVolumeSpecName: "kube-api-access-fqn98") pod "a122c60e-8e40-4ea3-a9e3-44ac911ae23c" (UID: "a122c60e-8e40-4ea3-a9e3-44ac911ae23c"). InnerVolumeSpecName "kube-api-access-fqn98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.592866 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqn98\" (UniqueName: \"kubernetes.io/projected/a122c60e-8e40-4ea3-a9e3-44ac911ae23c-kube-api-access-fqn98\") on node \"crc\" DevicePath \"\"" Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.746629 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" event={"ID":"a122c60e-8e40-4ea3-a9e3-44ac911ae23c","Type":"ContainerDied","Data":"81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5"} Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.746665 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81331b5232e10c1eb169a590923fb72d3c3e34da819f0adfb7c8b4097b633da5" Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.746679 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561422-nfv2c" Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.811455 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561416-hqr4s"] Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.819773 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561416-hqr4s"] Mar 16 18:22:06 crc kubenswrapper[4736]: I0316 18:22:06.991019 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba55e57-66b3-4e34-b234-25bef1fb480e" path="/var/lib/kubelet/pods/3ba55e57-66b3-4e34-b234-25bef1fb480e/volumes" Mar 16 18:22:08 crc kubenswrapper[4736]: I0316 18:22:08.694586 4736 scope.go:117] "RemoveContainer" containerID="9820eee29b4b5d2f04ab375553743894706dd994bc1cbfe506352610311da808" Mar 16 18:22:13 crc kubenswrapper[4736]: I0316 18:22:13.977802 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:22:13 crc kubenswrapper[4736]: E0316 18:22:13.978709 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:22:26 crc kubenswrapper[4736]: I0316 18:22:26.980126 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:22:26 crc kubenswrapper[4736]: E0316 18:22:26.993256 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:22:41 crc kubenswrapper[4736]: I0316 18:22:41.978920 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:22:41 crc kubenswrapper[4736]: E0316 18:22:41.980067 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:22:56 crc kubenswrapper[4736]: I0316 18:22:56.979495 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:22:56 crc kubenswrapper[4736]: E0316 18:22:56.980718 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 
18:23:11 crc kubenswrapper[4736]: I0316 18:23:11.978010 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:23:11 crc kubenswrapper[4736]: E0316 18:23:11.978990 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:23:25 crc kubenswrapper[4736]: I0316 18:23:25.978352 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:23:25 crc kubenswrapper[4736]: E0316 18:23:25.979062 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:23:38 crc kubenswrapper[4736]: I0316 18:23:38.985391 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:23:38 crc kubenswrapper[4736]: E0316 18:23:38.986861 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.524592 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:23:43 crc kubenswrapper[4736]: E0316 18:23:43.525745 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a122c60e-8e40-4ea3-a9e3-44ac911ae23c" containerName="oc" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.525759 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a122c60e-8e40-4ea3-a9e3-44ac911ae23c" containerName="oc" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.525982 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a122c60e-8e40-4ea3-a9e3-44ac911ae23c" containerName="oc" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.527265 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.541841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.678746 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6ph5\" (UniqueName: \"kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.679204 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.679567 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.783176 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6ph5\" (UniqueName: \"kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.783833 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.784344 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.784567 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.785002 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.820818 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q6ph5\" (UniqueName: \"kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5\") pod \"certified-operators-btv6q\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:43 crc kubenswrapper[4736]: I0316 18:23:43.843529 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:44 crc kubenswrapper[4736]: I0316 18:23:44.441093 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:23:44 crc kubenswrapper[4736]: I0316 18:23:44.768709 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerDied","Data":"7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d"} Mar 16 18:23:44 crc kubenswrapper[4736]: I0316 18:23:44.770564 4736 generic.go:334] "Generic (PLEG): container finished" podID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerID="7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d" exitCode=0 Mar 16 18:23:44 crc kubenswrapper[4736]: I0316 18:23:44.770604 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerStarted","Data":"e89e9bfaf59c6891fde2fe617155c6c83cd2227faf3c93d2ca55a44d80db4251"} Mar 16 18:23:45 crc kubenswrapper[4736]: I0316 18:23:45.782035 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerStarted","Data":"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2"} Mar 16 18:23:47 crc kubenswrapper[4736]: I0316 18:23:47.801295 4736 generic.go:334] "Generic (PLEG): container finished" podID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerID="23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2" exitCode=0 Mar 16 18:23:47 crc kubenswrapper[4736]: I0316 18:23:47.801343 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerDied","Data":"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2"} Mar 16 18:23:48 crc kubenswrapper[4736]: I0316 18:23:48.813675 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerStarted","Data":"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c"} Mar 16 18:23:48 crc kubenswrapper[4736]: I0316 18:23:48.836953 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-btv6q" podStartSLOduration=2.399787174 podStartE2EDuration="5.8369332s" podCreationTimestamp="2026-03-16 18:23:43 +0000 UTC" firstStartedPulling="2026-03-16 18:23:44.77127862 +0000 UTC m=+11426.498668907" lastFinishedPulling="2026-03-16 18:23:48.208424646 +0000 UTC m=+11429.935814933" observedRunningTime="2026-03-16 18:23:48.828210082 +0000 UTC m=+11430.555600379" watchObservedRunningTime="2026-03-16 18:23:48.8369332 +0000 UTC m=+11430.564323487" Mar 16 18:23:49 crc kubenswrapper[4736]: I0316 18:23:49.977549 4736 scope.go:117] "RemoveContainer" 
containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:23:49 crc kubenswrapper[4736]: E0316 18:23:49.978056 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:23:53 crc kubenswrapper[4736]: I0316 18:23:53.843893 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:53 crc kubenswrapper[4736]: I0316 18:23:53.845205 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:23:54 crc kubenswrapper[4736]: I0316 18:23:54.895940 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-btv6q" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="registry-server" probeResult="failure" output=< Mar 16 18:23:54 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:23:54 crc kubenswrapper[4736]: > Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.160010 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561424-qd7xt"] Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.162543 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.167398 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.167540 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.167696 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.192944 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561424-qd7xt"] Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.236388 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tssb2\" (UniqueName: \"kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2\") pod \"auto-csr-approver-29561424-qd7xt\" (UID: \"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e\") " pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.338602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tssb2\" (UniqueName: \"kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2\") pod \"auto-csr-approver-29561424-qd7xt\" (UID: \"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e\") " pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.360398 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tssb2\" (UniqueName: \"kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2\") 
pod \"auto-csr-approver-29561424-qd7xt\" (UID: \"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e\") " pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:00 crc kubenswrapper[4736]: I0316 18:24:00.498725 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:01 crc kubenswrapper[4736]: I0316 18:24:01.258032 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561424-qd7xt"] Mar 16 18:24:01 crc kubenswrapper[4736]: W0316 18:24:01.260873 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8eb6ca2_0a9d_4dd8_b3f3_cebf828ec43e.slice/crio-c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6 WatchSource:0}: Error finding container c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6: Status 404 returned error can't find the container with id c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6 Mar 16 18:24:01 crc kubenswrapper[4736]: I0316 18:24:01.941171 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" event={"ID":"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e","Type":"ContainerStarted","Data":"c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6"} Mar 16 18:24:03 crc kubenswrapper[4736]: I0316 18:24:03.912890 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:24:03 crc kubenswrapper[4736]: I0316 18:24:03.966869 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" event={"ID":"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e","Type":"ContainerStarted","Data":"ed7d2b2f44c2ff5ecfb1fcc502bae5ee63d80c28b56f49c33be48bef39e27c53"} Mar 16 18:24:03 crc kubenswrapper[4736]: I0316 18:24:03.991699 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:24:03 crc kubenswrapper[4736]: I0316 18:24:03.992749 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" podStartSLOduration=2.873624137 podStartE2EDuration="3.992732567s" podCreationTimestamp="2026-03-16 18:24:00 +0000 UTC" firstStartedPulling="2026-03-16 18:24:01.263272495 +0000 UTC m=+11442.990662812" lastFinishedPulling="2026-03-16 18:24:02.382380955 +0000 UTC m=+11444.109771242" observedRunningTime="2026-03-16 18:24:03.985998544 +0000 UTC m=+11445.713388831" watchObservedRunningTime="2026-03-16 18:24:03.992732567 +0000 UTC m=+11445.720122854" Mar 16 18:24:04 crc kubenswrapper[4736]: I0316 18:24:04.161031 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:24:04 crc kubenswrapper[4736]: I0316 18:24:04.977602 4736 generic.go:334] "Generic (PLEG): container finished" podID="d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" containerID="ed7d2b2f44c2ff5ecfb1fcc502bae5ee63d80c28b56f49c33be48bef39e27c53" exitCode=0 Mar 16 18:24:04 crc kubenswrapper[4736]: I0316 18:24:04.978560 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:24:04 crc kubenswrapper[4736]: I0316 18:24:04.978512 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-btv6q" 
podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="registry-server" containerID="cri-o://87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c" gracePeriod=2 Mar 16 18:24:04 crc kubenswrapper[4736]: E0316 18:24:04.978934 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.007647 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" event={"ID":"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e","Type":"ContainerDied","Data":"ed7d2b2f44c2ff5ecfb1fcc502bae5ee63d80c28b56f49c33be48bef39e27c53"} Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.568850 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.656868 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities\") pod \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.656972 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content\") pod \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.657024 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6ph5\" (UniqueName: \"kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5\") pod \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\" (UID: \"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301\") " Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.659293 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities" (OuterVolumeSpecName: "utilities") pod "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" (UID: "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.665282 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5" (OuterVolumeSpecName: "kube-api-access-q6ph5") pod "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" (UID: "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301"). InnerVolumeSpecName "kube-api-access-q6ph5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.708095 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" (UID: "73b0f83f-5cd2-4826-96ff-ca3c5cbc0301"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.759005 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.759030 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6ph5\" (UniqueName: \"kubernetes.io/projected/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-kube-api-access-q6ph5\") on node \"crc\" DevicePath \"\"" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.759039 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.990555 4736 generic.go:334] "Generic (PLEG): container finished" podID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerID="87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c" exitCode=0 Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.990655 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerDied","Data":"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c"} Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.990714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-btv6q" event={"ID":"73b0f83f-5cd2-4826-96ff-ca3c5cbc0301","Type":"ContainerDied","Data":"e89e9bfaf59c6891fde2fe617155c6c83cd2227faf3c93d2ca55a44d80db4251"} Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.990731 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-btv6q" Mar 16 18:24:05 crc kubenswrapper[4736]: I0316 18:24:05.990770 4736 scope.go:117] "RemoveContainer" containerID="87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.037018 4736 scope.go:117] "RemoveContainer" containerID="23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.050166 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.064490 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-btv6q"] Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.069564 4736 scope.go:117] "RemoveContainer" containerID="7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.109705 4736 scope.go:117] "RemoveContainer" containerID="87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c" Mar 16 18:24:06 crc kubenswrapper[4736]: E0316 18:24:06.110040 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c\": container with ID starting with 87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c not found: ID does not exist" containerID="87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.110069 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c"} err="failed to get container status \"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c\": rpc error: code = NotFound desc = could not find container \"87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c\": container with ID starting with 87e998c6b54e76cccac8d10817fb39a62ba4adc2386938a2eff67b8038f7628c not found: ID does not exist" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.110089 4736 scope.go:117] "RemoveContainer" containerID="23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2" Mar 16 18:24:06 crc kubenswrapper[4736]: E0316 18:24:06.110355 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2\": container with ID starting with 23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2 not found: ID does not exist" containerID="23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.110382 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2"} err="failed to get container status \"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2\": rpc error: code = NotFound desc = could not find container \"23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2\": container with ID starting with 23b77278d298233f5e9f2b7df656dcf910be4c0ee44888ce61e081734dc38bc2 not found: ID does not exist" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.110405 4736 scope.go:117] "RemoveContainer" 
containerID="7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d" Mar 16 18:24:06 crc kubenswrapper[4736]: E0316 18:24:06.110747 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d\": container with ID starting with 7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d not found: ID does not exist" containerID="7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.110798 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d"} err="failed to get container status \"7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d\": rpc error: code = NotFound desc = could not find container \"7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d\": container with ID starting with 7ef1e802c30c898e20bc2aeee37c2ffeee283fa86c135a14a5d9ad11c054bf2d not found: ID does not exist" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.419787 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.572778 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tssb2\" (UniqueName: \"kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2\") pod \"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e\" (UID: \"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e\") " Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.582902 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2" (OuterVolumeSpecName: "kube-api-access-tssb2") pod "d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" (UID: "d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e"). InnerVolumeSpecName "kube-api-access-tssb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.674796 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tssb2\" (UniqueName: \"kubernetes.io/projected/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e-kube-api-access-tssb2\") on node \"crc\" DevicePath \"\"" Mar 16 18:24:06 crc kubenswrapper[4736]: I0316 18:24:06.995344 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" path="/var/lib/kubelet/pods/73b0f83f-5cd2-4826-96ff-ca3c5cbc0301/volumes" Mar 16 18:24:07 crc kubenswrapper[4736]: I0316 18:24:07.001964 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" event={"ID":"d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e","Type":"ContainerDied","Data":"c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6"} Mar 16 18:24:07 crc kubenswrapper[4736]: I0316 18:24:07.002025 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c427c84bedc8a6696611bf742c29b5b8162e965416a9b8ed4ed013ca39c1dfb6" Mar 16 18:24:07 crc kubenswrapper[4736]: I0316 18:24:07.002087 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561424-qd7xt" Mar 16 18:24:07 crc kubenswrapper[4736]: I0316 18:24:07.067125 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561418-w9mwf"] Mar 16 18:24:07 crc kubenswrapper[4736]: I0316 18:24:07.076113 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561418-w9mwf"] Mar 16 18:24:08 crc kubenswrapper[4736]: I0316 18:24:08.993912 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b68c55f-995a-4b03-aa49-9da54f252b8f" path="/var/lib/kubelet/pods/2b68c55f-995a-4b03-aa49-9da54f252b8f/volumes" Mar 16 18:24:15 crc kubenswrapper[4736]: I0316 18:24:15.978361 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:24:15 crc kubenswrapper[4736]: E0316 18:24:15.979563 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:24:30 crc kubenswrapper[4736]: I0316 18:24:30.977947 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:24:30 crc kubenswrapper[4736]: E0316 18:24:30.979928 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:24:42 crc kubenswrapper[4736]: I0316 18:24:42.979153 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:24:42 crc kubenswrapper[4736]: E0316 18:24:42.980643 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:24:56 crc kubenswrapper[4736]: I0316 18:24:56.979063 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:24:56 crc kubenswrapper[4736]: E0316 18:24:56.980467 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.819474 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:25:07 crc 
kubenswrapper[4736]: E0316 18:25:07.820595 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="extract-utilities" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.820618 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="extract-utilities" Mar 16 18:25:07 crc kubenswrapper[4736]: E0316 18:25:07.820656 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="registry-server" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.820668 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="registry-server" Mar 16 18:25:07 crc kubenswrapper[4736]: E0316 18:25:07.820706 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" containerName="oc" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.820719 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" containerName="oc" Mar 16 18:25:07 crc kubenswrapper[4736]: E0316 18:25:07.820739 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="extract-content" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.820753 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="extract-content" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.821063 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b0f83f-5cd2-4826-96ff-ca3c5cbc0301" containerName="registry-server" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.821135 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" containerName="oc" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.826468 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.887184 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.928975 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr7xz\" (UniqueName: \"kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.929043 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:07 crc kubenswrapper[4736]: I0316 18:25:07.929165 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.030915 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.031066 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.031352 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr7xz\" (UniqueName: \"kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.031857 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.032975 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.049620 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pr7xz\" (UniqueName: \"kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz\") pod \"redhat-operators-rh4nb\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.212347 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.725542 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:25:08 crc kubenswrapper[4736]: I0316 18:25:08.888748 4736 scope.go:117] "RemoveContainer" containerID="d6652ceaa7183144175a7141ccc52da14e358ef43b04506a9ba1b7641c6f142e" Mar 16 18:25:09 crc kubenswrapper[4736]: I0316 18:25:09.750006 4736 generic.go:334] "Generic (PLEG): container finished" podID="30668278-78ed-4150-b57f-75f839bdd0ca" containerID="334896d852125de930c521ec37b84a5bb22f9ab7704666ecc95ee7de83e384f0" exitCode=0 Mar 16 18:25:09 crc kubenswrapper[4736]: I0316 18:25:09.750184 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerDied","Data":"334896d852125de930c521ec37b84a5bb22f9ab7704666ecc95ee7de83e384f0"} Mar 16 18:25:09 crc kubenswrapper[4736]: I0316 18:25:09.750335 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerStarted","Data":"094eeec4253d4a97a080e12c060c3074a00b6fa44f98946e741ce1d72cbce522"} Mar 16 18:25:09 crc kubenswrapper[4736]: I0316 18:25:09.761522 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:25:09 crc kubenswrapper[4736]: I0316 18:25:09.978804 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:25:09 crc kubenswrapper[4736]: E0316 18:25:09.979422 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:25:11 crc kubenswrapper[4736]: I0316 18:25:11.770302 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerStarted","Data":"c2684d20c11709cda9ec27f10cd4b80794ed0bb6c9016bde46f98bbf27f2b974"} Mar 16 18:25:15 crc kubenswrapper[4736]: I0316 18:25:15.807085 4736 generic.go:334] "Generic (PLEG): container finished" podID="30668278-78ed-4150-b57f-75f839bdd0ca" containerID="c2684d20c11709cda9ec27f10cd4b80794ed0bb6c9016bde46f98bbf27f2b974" exitCode=0 Mar 16 18:25:15 crc kubenswrapper[4736]: I0316 18:25:15.807571 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerDied","Data":"c2684d20c11709cda9ec27f10cd4b80794ed0bb6c9016bde46f98bbf27f2b974"} Mar 16 18:25:17 crc kubenswrapper[4736]: I0316 18:25:17.836348 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerStarted","Data":"960872a26df4a98ec20e987d0b20bcb6483c090517cd24a4bc058b1b34bd3686"} Mar 16 18:25:18 crc kubenswrapper[4736]: I0316 18:25:18.242964 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:18 crc kubenswrapper[4736]: I0316 18:25:18.242999 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:19 crc kubenswrapper[4736]: I0316 18:25:19.289563 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rh4nb" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" probeResult="failure" output=< Mar 16 18:25:19 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:25:19 crc kubenswrapper[4736]: > Mar 16 18:25:24 crc kubenswrapper[4736]: I0316 18:25:24.978342 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:25:24 crc kubenswrapper[4736]: E0316 18:25:24.979056 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:25:29 crc kubenswrapper[4736]: I0316 18:25:29.268909 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rh4nb" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" probeResult="failure" output=< Mar 16 18:25:29 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:25:29 crc kubenswrapper[4736]: > Mar 16 18:25:36 crc kubenswrapper[4736]: I0316 18:25:36.978945 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:25:36 crc kubenswrapper[4736]: E0316 18:25:36.979872 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:25:39 crc kubenswrapper[4736]: I0316 18:25:39.267380 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rh4nb" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" probeResult="failure" output=< Mar 16 18:25:39 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:25:39 crc kubenswrapper[4736]: > Mar 16 18:25:47 crc kubenswrapper[4736]: I0316 18:25:47.978052 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:25:47 crc kubenswrapper[4736]: E0316 18:25:47.978904 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:25:49 crc kubenswrapper[4736]: I0316 18:25:49.264313 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rh4nb" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" probeResult="failure" output=< Mar 16 18:25:49 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:25:49 crc kubenswrapper[4736]: > Mar 16 18:25:58 crc kubenswrapper[4736]: I0316 18:25:58.277840 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:58 crc kubenswrapper[4736]: I0316 18:25:58.315311 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rh4nb" podStartSLOduration=44.428418362 podStartE2EDuration="51.309780906s" podCreationTimestamp="2026-03-16 18:25:07 +0000 UTC" firstStartedPulling="2026-03-16 18:25:09.759161409 +0000 UTC m=+11511.486551696" lastFinishedPulling="2026-03-16 18:25:16.640523943 +0000 UTC m=+11518.367914240" observedRunningTime="2026-03-16 18:25:17.864877822 +0000 UTC m=+11519.592268109" watchObservedRunningTime="2026-03-16 18:25:58.309780906 +0000 UTC m=+11560.037171193" Mar 16 18:25:58 crc kubenswrapper[4736]: I0316 18:25:58.343646 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:25:58 crc kubenswrapper[4736]: I0316 18:25:58.529476 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.182430 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561426-ljmnv"] Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.192785 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.197447 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561426-ljmnv"] Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.213468 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.213481 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.213535 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.236711 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rh4nb" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" containerID="cri-o://960872a26df4a98ec20e987d0b20bcb6483c090517cd24a4bc058b1b34bd3686" gracePeriod=2 Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.332263 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7q8\" (UniqueName: \"kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8\") pod \"auto-csr-approver-29561426-ljmnv\" (UID: \"e9f0c706-a0fd-4796-8602-38fd8cca14e0\") " pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.434270 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7q8\" (UniqueName: \"kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8\") pod \"auto-csr-approver-29561426-ljmnv\" (UID: \"e9f0c706-a0fd-4796-8602-38fd8cca14e0\") " pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.479313 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7q8\" (UniqueName: \"kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8\") pod \"auto-csr-approver-29561426-ljmnv\" (UID: \"e9f0c706-a0fd-4796-8602-38fd8cca14e0\") " pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:00 crc kubenswrapper[4736]: I0316 18:26:00.522449 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.256926 4736 generic.go:334] "Generic (PLEG): container finished" podID="30668278-78ed-4150-b57f-75f839bdd0ca" containerID="960872a26df4a98ec20e987d0b20bcb6483c090517cd24a4bc058b1b34bd3686" exitCode=0 Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.257048 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerDied","Data":"960872a26df4a98ec20e987d0b20bcb6483c090517cd24a4bc058b1b34bd3686"} Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.257290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rh4nb" event={"ID":"30668278-78ed-4150-b57f-75f839bdd0ca","Type":"ContainerDied","Data":"094eeec4253d4a97a080e12c060c3074a00b6fa44f98946e741ce1d72cbce522"} Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.257929 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="094eeec4253d4a97a080e12c060c3074a00b6fa44f98946e741ce1d72cbce522" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.269950 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.350309 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content\") pod \"30668278-78ed-4150-b57f-75f839bdd0ca\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.350452 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr7xz\" (UniqueName: \"kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz\") pod \"30668278-78ed-4150-b57f-75f839bdd0ca\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.350513 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities\") pod \"30668278-78ed-4150-b57f-75f839bdd0ca\" (UID: \"30668278-78ed-4150-b57f-75f839bdd0ca\") " Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.356667 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities" (OuterVolumeSpecName: "utilities") pod "30668278-78ed-4150-b57f-75f839bdd0ca" (UID: "30668278-78ed-4150-b57f-75f839bdd0ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.365178 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz" (OuterVolumeSpecName: "kube-api-access-pr7xz") pod "30668278-78ed-4150-b57f-75f839bdd0ca" (UID: "30668278-78ed-4150-b57f-75f839bdd0ca"). InnerVolumeSpecName "kube-api-access-pr7xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.454196 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr7xz\" (UniqueName: \"kubernetes.io/projected/30668278-78ed-4150-b57f-75f839bdd0ca-kube-api-access-pr7xz\") on node \"crc\" DevicePath \"\"" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.454229 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.492532 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561426-ljmnv"] Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.509942 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30668278-78ed-4150-b57f-75f839bdd0ca" (UID: "30668278-78ed-4150-b57f-75f839bdd0ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.556787 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30668278-78ed-4150-b57f-75f839bdd0ca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:26:01 crc kubenswrapper[4736]: I0316 18:26:01.978813 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:26:01 crc kubenswrapper[4736]: E0316 18:26:01.979058 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:26:02 crc kubenswrapper[4736]: I0316 18:26:02.270577 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" event={"ID":"e9f0c706-a0fd-4796-8602-38fd8cca14e0","Type":"ContainerStarted","Data":"971ae6e471d1bb7eb72ee21bf631b406b0d4c9e5cdf950216d4929d7eb55f80d"} Mar 16 18:26:02 crc kubenswrapper[4736]: I0316 18:26:02.270668 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rh4nb" Mar 16 18:26:02 crc kubenswrapper[4736]: I0316 18:26:02.307933 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:26:02 crc kubenswrapper[4736]: I0316 18:26:02.316659 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rh4nb"] Mar 16 18:26:02 crc kubenswrapper[4736]: I0316 18:26:02.989843 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" path="/var/lib/kubelet/pods/30668278-78ed-4150-b57f-75f839bdd0ca/volumes" Mar 16 18:26:04 crc kubenswrapper[4736]: I0316 18:26:04.292729 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" event={"ID":"e9f0c706-a0fd-4796-8602-38fd8cca14e0","Type":"ContainerStarted","Data":"58bd31c7f49cc46205052426da7b18ee9fac9f08eed75983c14133aee692dcd8"} Mar 16 18:26:04 crc kubenswrapper[4736]: I0316 18:26:04.322495 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" podStartSLOduration=3.076402646 podStartE2EDuration="4.322467086s" podCreationTimestamp="2026-03-16 18:26:00 +0000 UTC" firstStartedPulling="2026-03-16 18:26:01.516372836 +0000 UTC m=+11563.243763123" lastFinishedPulling="2026-03-16 18:26:02.762437276 +0000 UTC m=+11564.489827563" observedRunningTime="2026-03-16 18:26:04.31636576 +0000 UTC m=+11566.043756077" watchObservedRunningTime="2026-03-16 18:26:04.322467086 +0000 UTC m=+11566.049857413" Mar 16 18:26:05 crc kubenswrapper[4736]: I0316 18:26:05.307056 4736 generic.go:334] "Generic (PLEG): container finished" podID="e9f0c706-a0fd-4796-8602-38fd8cca14e0" containerID="58bd31c7f49cc46205052426da7b18ee9fac9f08eed75983c14133aee692dcd8" exitCode=0 Mar 16 18:26:05 crc kubenswrapper[4736]: I0316 18:26:05.307188 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" event={"ID":"e9f0c706-a0fd-4796-8602-38fd8cca14e0","Type":"ContainerDied","Data":"58bd31c7f49cc46205052426da7b18ee9fac9f08eed75983c14133aee692dcd8"} Mar 16 18:26:06 crc kubenswrapper[4736]: I0316 18:26:06.755927 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:06 crc kubenswrapper[4736]: I0316 18:26:06.878130 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz7q8\" (UniqueName: \"kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8\") pod \"e9f0c706-a0fd-4796-8602-38fd8cca14e0\" (UID: \"e9f0c706-a0fd-4796-8602-38fd8cca14e0\") " Mar 16 18:26:06 crc kubenswrapper[4736]: I0316 18:26:06.890370 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8" (OuterVolumeSpecName: "kube-api-access-tz7q8") pod "e9f0c706-a0fd-4796-8602-38fd8cca14e0" (UID: "e9f0c706-a0fd-4796-8602-38fd8cca14e0"). InnerVolumeSpecName "kube-api-access-tz7q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:26:06 crc kubenswrapper[4736]: I0316 18:26:06.980286 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz7q8\" (UniqueName: \"kubernetes.io/projected/e9f0c706-a0fd-4796-8602-38fd8cca14e0-kube-api-access-tz7q8\") on node \"crc\" DevicePath \"\"" Mar 16 18:26:07 crc kubenswrapper[4736]: I0316 18:26:07.331611 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" event={"ID":"e9f0c706-a0fd-4796-8602-38fd8cca14e0","Type":"ContainerDied","Data":"971ae6e471d1bb7eb72ee21bf631b406b0d4c9e5cdf950216d4929d7eb55f80d"} Mar 16 18:26:07 crc kubenswrapper[4736]: I0316 18:26:07.331915 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971ae6e471d1bb7eb72ee21bf631b406b0d4c9e5cdf950216d4929d7eb55f80d" Mar 16 18:26:07 crc kubenswrapper[4736]: I0316 18:26:07.331970 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561426-ljmnv" Mar 16 18:26:07 crc kubenswrapper[4736]: I0316 18:26:07.421189 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561420-c2c7m"] Mar 16 18:26:07 crc kubenswrapper[4736]: I0316 18:26:07.432611 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561420-c2c7m"] Mar 16 18:26:08 crc kubenswrapper[4736]: I0316 18:26:08.992071 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a57c754-211e-4ba8-a9a7-d194e5d82012" path="/var/lib/kubelet/pods/1a57c754-211e-4ba8-a9a7-d194e5d82012/volumes" Mar 16 18:26:15 crc kubenswrapper[4736]: I0316 18:26:15.978723 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:26:15 crc kubenswrapper[4736]: E0316 18:26:15.979697 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:26:18 crc kubenswrapper[4736]: I0316 18:26:18.822938 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" podUID="634ac783-1fe6-4191-b432-f22ad5d84357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 18:26:18 crc kubenswrapper[4736]: I0316 18:26:18.912687 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-9b78c" podUID="634ac783-1fe6-4191-b432-f22ad5d84357" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 18:26:28 crc kubenswrapper[4736]: I0316 18:26:28.984033 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:26:28 crc kubenswrapper[4736]: E0316 18:26:28.984975 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:26:40 crc kubenswrapper[4736]: I0316 18:26:40.977722 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:26:42 crc kubenswrapper[4736]: I0316 18:26:42.226241 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2"} Mar 16 18:27:09 crc kubenswrapper[4736]: I0316 18:27:09.051707 4736 scope.go:117] "RemoveContainer" containerID="3fac0eb2ee8f1212990f4c459bd12f9900b102c8f958bbcfdbddba82aede1e79" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.574695 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561428-gpw65"] Mar 16 18:28:00 crc kubenswrapper[4736]: E0316 18:28:00.584506 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="extract-utilities" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585217 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="extract-utilities" Mar 16 18:28:00 crc kubenswrapper[4736]: E0316 18:28:00.585272 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f0c706-a0fd-4796-8602-38fd8cca14e0" containerName="oc" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585283 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f0c706-a0fd-4796-8602-38fd8cca14e0" containerName="oc" Mar 16 18:28:00 crc kubenswrapper[4736]: E0316 18:28:00.585324 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="extract-content" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585333 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="extract-content" Mar 16 18:28:00 crc kubenswrapper[4736]: E0316 18:28:00.585366 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585377 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585717 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f0c706-a0fd-4796-8602-38fd8cca14e0" containerName="oc" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.585760 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="30668278-78ed-4150-b57f-75f839bdd0ca" containerName="registry-server" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.590268 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.594261 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.594368 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.599084 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.692702 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq9dp\" (UniqueName: \"kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp\") pod \"auto-csr-approver-29561428-gpw65\" (UID: \"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d\") " pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.756267 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561428-gpw65"] Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.794607 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq9dp\" (UniqueName: \"kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp\") pod \"auto-csr-approver-29561428-gpw65\" (UID: \"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d\") " pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.837702 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq9dp\" (UniqueName: \"kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp\") pod \"auto-csr-approver-29561428-gpw65\" (UID: \"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d\") " pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:00 crc kubenswrapper[4736]: I0316 18:28:00.925967 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:02 crc kubenswrapper[4736]: I0316 18:28:02.652822 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561428-gpw65"] Mar 16 18:28:02 crc kubenswrapper[4736]: W0316 18:28:02.682834 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c1e3d_b99c_4aaa_b7b1_ffe18b15db6d.slice/crio-09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0 WatchSource:0}: Error finding container 09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0: Status 404 returned error can't find the container with id 09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0 Mar 16 18:28:03 crc kubenswrapper[4736]: I0316 18:28:03.075516 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561428-gpw65" event={"ID":"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d","Type":"ContainerStarted","Data":"09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0"} Mar 16 18:28:08 crc kubenswrapper[4736]: I0316 18:28:08.126545 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561428-gpw65" event={"ID":"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d","Type":"ContainerStarted","Data":"ea7f65d77b54958e5e2f0f3293b2650fb9d1b07a98d4b34821b05d712819d346"} Mar 16 18:28:08 crc kubenswrapper[4736]: I0316 18:28:08.215455 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561428-gpw65" podStartSLOduration=6.569831658 podStartE2EDuration="8.215429104s" podCreationTimestamp="2026-03-16 18:28:00 +0000 UTC" firstStartedPulling="2026-03-16 18:28:02.695629683 +0000 UTC m=+11684.423019970" lastFinishedPulling="2026-03-16 18:28:04.341227119 +0000 UTC m=+11686.068617416" observedRunningTime="2026-03-16 18:28:08.213952524 +0000 UTC m=+11689.941342831" watchObservedRunningTime="2026-03-16 18:28:08.215429104 +0000 UTC m=+11689.942819401" Mar 16 18:28:10 crc kubenswrapper[4736]: I0316 18:28:10.146262 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561428-gpw65" event={"ID":"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d","Type":"ContainerDied","Data":"ea7f65d77b54958e5e2f0f3293b2650fb9d1b07a98d4b34821b05d712819d346"} Mar 16 18:28:10 crc kubenswrapper[4736]: I0316 18:28:10.146938 4736 generic.go:334] "Generic (PLEG): container finished" podID="8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" containerID="ea7f65d77b54958e5e2f0f3293b2650fb9d1b07a98d4b34821b05d712819d346" exitCode=0 Mar 16 18:28:11 crc kubenswrapper[4736]: I0316 18:28:11.563477 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:11 crc kubenswrapper[4736]: I0316 18:28:11.692866 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq9dp\" (UniqueName: \"kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp\") pod \"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d\" (UID: \"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d\") " Mar 16 18:28:11 crc kubenswrapper[4736]: I0316 18:28:11.703030 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp" (OuterVolumeSpecName: "kube-api-access-vq9dp") pod "8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" (UID: "8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d"). InnerVolumeSpecName "kube-api-access-vq9dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:28:11 crc kubenswrapper[4736]: I0316 18:28:11.795563 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq9dp\" (UniqueName: \"kubernetes.io/projected/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d-kube-api-access-vq9dp\") on node \"crc\" DevicePath \"\"" Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.173220 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561428-gpw65" event={"ID":"8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d","Type":"ContainerDied","Data":"09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0"} Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.173298 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f3cf0804f9c36bd316ea41e90714371907d6b2224d7061694924eb3bca07e0" Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.173701 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561428-gpw65" Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.683637 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561422-nfv2c"] Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.691923 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561422-nfv2c"] Mar 16 18:28:12 crc kubenswrapper[4736]: I0316 18:28:12.988034 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a122c60e-8e40-4ea3-a9e3-44ac911ae23c" path="/var/lib/kubelet/pods/a122c60e-8e40-4ea3-a9e3-44ac911ae23c/volumes" Mar 16 18:29:08 crc kubenswrapper[4736]: I0316 18:29:08.508966 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:29:08 crc kubenswrapper[4736]: I0316 18:29:08.512412 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:29:09 crc kubenswrapper[4736]: I0316 18:29:09.220848 4736 scope.go:117] "RemoveContainer" containerID="06a6a3f6fc11839851938e0a0dd53934eedf4d5749df76a976dd37c6247197d5" Mar 16 18:29:38 crc kubenswrapper[4736]: I0316 18:29:38.507895 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:29:38 crc kubenswrapper[4736]: I0316 18:29:38.508346 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.201128 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561430-x9c48"] Mar 16 18:30:00 crc kubenswrapper[4736]: E0316 18:30:00.206041 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" containerName="oc" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.206089 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" containerName="oc" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.206521 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" containerName="oc" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.211011 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.212306 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx"] Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.214436 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.221329 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.221344 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.221370 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.221363 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.221372 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.236739 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx"] Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.245331 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561430-x9c48"] Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.394659 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.395488 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b2cv\" (UniqueName: \"kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.395658 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9cz\" (UniqueName: \"kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz\") pod \"auto-csr-approver-29561430-x9c48\" (UID: \"2f189480-c87b-4b74-a487-3d172c7de20a\") " pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.395701 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 
16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.498230 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.498324 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b2cv\" (UniqueName: \"kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.498370 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf9cz\" (UniqueName: \"kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz\") pod \"auto-csr-approver-29561430-x9c48\" (UID: \"2f189480-c87b-4b74-a487-3d172c7de20a\") " pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.498386 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.502289 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.510334 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.529633 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b2cv\" (UniqueName: \"kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv\") pod \"collect-profiles-29561430-6xprx\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.532034 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf9cz\" (UniqueName: \"kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz\") pod \"auto-csr-approver-29561430-x9c48\" (UID: \"2f189480-c87b-4b74-a487-3d172c7de20a\") " pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.537796 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:00 crc kubenswrapper[4736]: I0316 18:30:00.548137 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:01 crc kubenswrapper[4736]: I0316 18:30:01.654509 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx"] Mar 16 18:30:01 crc kubenswrapper[4736]: I0316 18:30:01.675842 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561430-x9c48"] Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.267804 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" event={"ID":"a6a79683-81b2-4273-8de1-a68c9daa15cb","Type":"ContainerStarted","Data":"4b9d82792da7ee61386f5581c73f9a22e8e6327d047c96f061982b9e3d8c4ec4"} Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.268141 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" event={"ID":"a6a79683-81b2-4273-8de1-a68c9daa15cb","Type":"ContainerStarted","Data":"376871df7b7860d63dc9d3c5908256039c39cf330c8c198377c2386945aadce4"} Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.269823 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561430-x9c48" event={"ID":"2f189480-c87b-4b74-a487-3d172c7de20a","Type":"ContainerStarted","Data":"44d43fa98d9d039d53ee825dfc91709daeeba99852fad6f69ff55b215780e74f"} Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.288640 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" podStartSLOduration=2.286315296 podStartE2EDuration="2.286315296s" podCreationTimestamp="2026-03-16 18:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:30:02.284241179 +0000 UTC m=+11804.011631476" watchObservedRunningTime="2026-03-16 18:30:02.286315296 +0000 UTC m=+11804.013705583" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.532561 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.535846 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.565479 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.636570 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.636614 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dtq\" (UniqueName: \"kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.636768 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.739049 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.739235 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.739257 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dtq\" (UniqueName: \"kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.744636 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.745982 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.762051 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d2dtq\" (UniqueName: \"kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq\") pod \"community-operators-5kn7m\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:02 crc kubenswrapper[4736]: I0316 18:30:02.858865 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:03 crc kubenswrapper[4736]: I0316 18:30:03.278472 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" event={"ID":"a6a79683-81b2-4273-8de1-a68c9daa15cb","Type":"ContainerDied","Data":"4b9d82792da7ee61386f5581c73f9a22e8e6327d047c96f061982b9e3d8c4ec4"} Mar 16 18:30:03 crc kubenswrapper[4736]: I0316 18:30:03.278951 4736 generic.go:334] "Generic (PLEG): container finished" podID="a6a79683-81b2-4273-8de1-a68c9daa15cb" containerID="4b9d82792da7ee61386f5581c73f9a22e8e6327d047c96f061982b9e3d8c4ec4" exitCode=0 Mar 16 18:30:03 crc kubenswrapper[4736]: I0316 18:30:03.340278 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.289546 4736 generic.go:334] "Generic (PLEG): container finished" podID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerID="b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de" exitCode=0 Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.289801 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerDied","Data":"b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de"} Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.289888 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerStarted","Data":"427c4ec5437f03a728028f7d5a310776d459aa8e27aa216cdc86feabf0cda1d7"} Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.818147 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.983590 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume\") pod \"a6a79683-81b2-4273-8de1-a68c9daa15cb\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.983859 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume\") pod \"a6a79683-81b2-4273-8de1-a68c9daa15cb\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " Mar 16 18:30:04 crc kubenswrapper[4736]: I0316 18:30:04.983911 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b2cv\" (UniqueName: \"kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv\") pod \"a6a79683-81b2-4273-8de1-a68c9daa15cb\" (UID: \"a6a79683-81b2-4273-8de1-a68c9daa15cb\") " Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.001704 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "a6a79683-81b2-4273-8de1-a68c9daa15cb" (UID: "a6a79683-81b2-4273-8de1-a68c9daa15cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.013019 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a6a79683-81b2-4273-8de1-a68c9daa15cb" (UID: "a6a79683-81b2-4273-8de1-a68c9daa15cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.023202 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv" (OuterVolumeSpecName: "kube-api-access-8b2cv") pod "a6a79683-81b2-4273-8de1-a68c9daa15cb" (UID: "a6a79683-81b2-4273-8de1-a68c9daa15cb"). InnerVolumeSpecName "kube-api-access-8b2cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.086594 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a79683-81b2-4273-8de1-a68c9daa15cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.086621 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a6a79683-81b2-4273-8de1-a68c9daa15cb-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.086633 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b2cv\" (UniqueName: \"kubernetes.io/projected/a6a79683-81b2-4273-8de1-a68c9daa15cb-kube-api-access-8b2cv\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.301643 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561430-x9c48" event={"ID":"2f189480-c87b-4b74-a487-3d172c7de20a","Type":"ContainerStarted","Data":"8b8c9ca5b34a573b31cdcdc9ce6d95a50f86091303f1cf9a5d6af9453950d5c1"} Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.304534 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" event={"ID":"a6a79683-81b2-4273-8de1-a68c9daa15cb","Type":"ContainerDied","Data":"376871df7b7860d63dc9d3c5908256039c39cf330c8c198377c2386945aadce4"} Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.305262 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561430-6xprx" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.306144 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376871df7b7860d63dc9d3c5908256039c39cf330c8c198377c2386945aadce4" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.324437 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561430-x9c48" podStartSLOduration=2.805403035 podStartE2EDuration="5.324416875s" podCreationTimestamp="2026-03-16 18:30:00 +0000 UTC" firstStartedPulling="2026-03-16 18:30:01.674395675 +0000 UTC m=+11803.401785962" lastFinishedPulling="2026-03-16 18:30:04.193409525 +0000 UTC m=+11805.920799802" observedRunningTime="2026-03-16 18:30:05.31873382 +0000 UTC m=+11807.046124097" watchObservedRunningTime="2026-03-16 18:30:05.324416875 +0000 UTC m=+11807.051807162" Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.920797 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk"] Mar 16 18:30:05 crc kubenswrapper[4736]: I0316 18:30:05.943359 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561385-ljmfk"] Mar 16 18:30:06 crc kubenswrapper[4736]: I0316 18:30:06.314358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerStarted","Data":"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0"} Mar 16 18:30:06 crc kubenswrapper[4736]: I0316 18:30:06.995500 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf4cf0c2-3aa0-4f87-86f9-772752b42d5d" 
path="/var/lib/kubelet/pods/bf4cf0c2-3aa0-4f87-86f9-772752b42d5d/volumes" Mar 16 18:30:07 crc kubenswrapper[4736]: I0316 18:30:07.324134 4736 generic.go:334] "Generic (PLEG): container finished" podID="2f189480-c87b-4b74-a487-3d172c7de20a" containerID="8b8c9ca5b34a573b31cdcdc9ce6d95a50f86091303f1cf9a5d6af9453950d5c1" exitCode=0 Mar 16 18:30:07 crc kubenswrapper[4736]: I0316 18:30:07.324414 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561430-x9c48" event={"ID":"2f189480-c87b-4b74-a487-3d172c7de20a","Type":"ContainerDied","Data":"8b8c9ca5b34a573b31cdcdc9ce6d95a50f86091303f1cf9a5d6af9453950d5c1"} Mar 16 18:30:07 crc kubenswrapper[4736]: I0316 18:30:07.327205 4736 generic.go:334] "Generic (PLEG): container finished" podID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerID="c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0" exitCode=0 Mar 16 18:30:07 crc kubenswrapper[4736]: I0316 18:30:07.327242 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerDied","Data":"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0"} Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.337409 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerStarted","Data":"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d"} Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.363521 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5kn7m" podStartSLOduration=2.777872617 podStartE2EDuration="6.363488341s" podCreationTimestamp="2026-03-16 18:30:02 +0000 UTC" firstStartedPulling="2026-03-16 18:30:04.291383327 +0000 UTC m=+11806.018773614" lastFinishedPulling="2026-03-16 18:30:07.876999051 +0000 UTC m=+11809.604389338" observedRunningTime="2026-03-16 18:30:08.353409415 +0000 UTC m=+11810.080799702" watchObservedRunningTime="2026-03-16 18:30:08.363488341 +0000 UTC m=+11810.090878628" Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.507950 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.508000 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.508033 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.510429 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.512717 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2" gracePeriod=600 Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.797230 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.963369 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf9cz\" (UniqueName: \"kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz\") pod \"2f189480-c87b-4b74-a487-3d172c7de20a\" (UID: \"2f189480-c87b-4b74-a487-3d172c7de20a\") " Mar 16 18:30:08 crc kubenswrapper[4736]: I0316 18:30:08.985381 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz" (OuterVolumeSpecName: "kube-api-access-qf9cz") pod "2f189480-c87b-4b74-a487-3d172c7de20a" (UID: "2f189480-c87b-4b74-a487-3d172c7de20a"). InnerVolumeSpecName "kube-api-access-qf9cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.066636 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf9cz\" (UniqueName: \"kubernetes.io/projected/2f189480-c87b-4b74-a487-3d172c7de20a-kube-api-access-qf9cz\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.348171 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2" exitCode=0 Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.348224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2"} Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.348251 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046"} Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.348268 4736 scope.go:117] "RemoveContainer" containerID="019429b0db31b8d90a2b9fe42691b96d5d942c754a0e9abb55187052a6c9101f" Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.352305 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561430-x9c48" Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.352701 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561430-x9c48" event={"ID":"2f189480-c87b-4b74-a487-3d172c7de20a","Type":"ContainerDied","Data":"44d43fa98d9d039d53ee825dfc91709daeeba99852fad6f69ff55b215780e74f"} Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.352719 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d43fa98d9d039d53ee825dfc91709daeeba99852fad6f69ff55b215780e74f" Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.413180 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561424-qd7xt"] Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.422915 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561424-qd7xt"] Mar 16 18:30:09 crc kubenswrapper[4736]: I0316 18:30:09.460034 4736 scope.go:117] "RemoveContainer" containerID="3c62dbb0a450cd138ae1f2a14f82937edabcc97a8105d149f4566b4f38b35978" Mar 16 18:30:10 crc kubenswrapper[4736]: I0316 18:30:10.989097 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e" path="/var/lib/kubelet/pods/d8eb6ca2-0a9d-4dd8-b3f3-cebf828ec43e/volumes" Mar 16 18:30:12 crc kubenswrapper[4736]: I0316 18:30:12.860258 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:12 crc kubenswrapper[4736]: I0316 18:30:12.860882 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:13 crc kubenswrapper[4736]: I0316 18:30:13.918559 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5kn7m" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="registry-server" probeResult="failure" output=< Mar 16 18:30:13 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:30:13 crc kubenswrapper[4736]: > Mar 16 18:30:22 crc kubenswrapper[4736]: I0316 18:30:22.940400 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:23 crc kubenswrapper[4736]: I0316 18:30:23.010688 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:23 crc kubenswrapper[4736]: I0316 18:30:23.202734 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:24 crc kubenswrapper[4736]: I0316 18:30:24.502590 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5kn7m" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="registry-server" containerID="cri-o://ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d" gracePeriod=2 Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.282774 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.392149 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities\") pod \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.392884 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities" (OuterVolumeSpecName: "utilities") pod "0cc2cd3a-4f09-4923-83c9-dc06543c64e4" (UID: "0cc2cd3a-4f09-4923-83c9-dc06543c64e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.393061 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content\") pod \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.393873 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dtq\" (UniqueName: \"kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq\") pod \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\" (UID: \"0cc2cd3a-4f09-4923-83c9-dc06543c64e4\") " Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.394877 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.411713 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq" (OuterVolumeSpecName: "kube-api-access-d2dtq") pod "0cc2cd3a-4f09-4923-83c9-dc06543c64e4" (UID: "0cc2cd3a-4f09-4923-83c9-dc06543c64e4"). InnerVolumeSpecName "kube-api-access-d2dtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.454411 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cc2cd3a-4f09-4923-83c9-dc06543c64e4" (UID: "0cc2cd3a-4f09-4923-83c9-dc06543c64e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.496482 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.496518 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dtq\" (UniqueName: \"kubernetes.io/projected/0cc2cd3a-4f09-4923-83c9-dc06543c64e4-kube-api-access-d2dtq\") on node \"crc\" DevicePath \"\"" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.512292 4736 generic.go:334] "Generic (PLEG): container finished" podID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerID="ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d" exitCode=0 Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.512358 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerDied","Data":"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d"} Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.512387 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5kn7m" event={"ID":"0cc2cd3a-4f09-4923-83c9-dc06543c64e4","Type":"ContainerDied","Data":"427c4ec5437f03a728028f7d5a310776d459aa8e27aa216cdc86feabf0cda1d7"} Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.512405 4736 scope.go:117] "RemoveContainer" containerID="ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.512563 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5kn7m" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.558065 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.565357 4736 scope.go:117] "RemoveContainer" containerID="c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.570599 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5kn7m"] Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.592098 4736 scope.go:117] "RemoveContainer" containerID="b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.635217 4736 scope.go:117] "RemoveContainer" containerID="ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d" Mar 16 18:30:25 crc kubenswrapper[4736]: E0316 18:30:25.644764 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d\": container with ID starting with ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d not found: ID does not exist" containerID="ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.644814 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d"} err="failed to get container status \"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d\": rpc error: code = NotFound desc = could not find container \"ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d\": container with ID starting with ea327f2d0ecb5ca71a327e2d4c59252faaa167cfa607c39359eb06da8e034c4d not found: ID does not exist" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.644843 4736 scope.go:117] "RemoveContainer" containerID="c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0" Mar 16 18:30:25 crc kubenswrapper[4736]: E0316 18:30:25.645345 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0\": container with ID starting with c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0 not found: ID does not exist" containerID="c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.645378 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0"} err="failed to get container status \"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0\": rpc error: code = NotFound desc = could not find container \"c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0\": container with ID starting with c317e99d156ef0358154339762ef951b75e130e15ba6b458650635d995b3a9c0 not found: ID does not exist" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.645402 4736 scope.go:117] "RemoveContainer" containerID="b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de" Mar 16 18:30:25 crc kubenswrapper[4736]: E0316 18:30:25.645680 4736 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de\": container with ID starting with b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de not found: ID does not exist" containerID="b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de" Mar 16 18:30:25 crc kubenswrapper[4736]: I0316 18:30:25.645700 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de"} err="failed to get container status \"b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de\": rpc error: code = NotFound desc = could not find container \"b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de\": container with ID starting with b3433b8aec044fb709e1b14c71ab8012426cc79f7b83b48c46a3a6a8a79623de not found: ID does not exist" Mar 16 18:30:26 crc kubenswrapper[4736]: I0316 18:30:26.997243 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" path="/var/lib/kubelet/pods/0cc2cd3a-4f09-4923-83c9-dc06543c64e4/volumes" Mar 16 18:31:09 crc kubenswrapper[4736]: I0316 18:31:09.636062 4736 scope.go:117] "RemoveContainer" containerID="ed7d2b2f44c2ff5ecfb1fcc502bae5ee63d80c28b56f49c33be48bef39e27c53" Mar 16 18:31:09 crc kubenswrapper[4736]: I0316 18:31:09.766305 4736 scope.go:117] "RemoveContainer" containerID="334896d852125de930c521ec37b84a5bb22f9ab7704666ecc95ee7de83e384f0" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.322727 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561432-8c7ht"] Mar 16 18:32:00 crc kubenswrapper[4736]: E0316 18:32:00.329260 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="extract-utilities" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.329306 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="extract-utilities" Mar 16 18:32:00 crc kubenswrapper[4736]: E0316 18:32:00.329350 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="extract-content" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.329364 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="extract-content" Mar 16 18:32:00 crc kubenswrapper[4736]: E0316 18:32:00.329400 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="registry-server" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.329413 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="registry-server" Mar 16 18:32:00 crc kubenswrapper[4736]: E0316 18:32:00.329510 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f189480-c87b-4b74-a487-3d172c7de20a" containerName="oc" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.329523 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f189480-c87b-4b74-a487-3d172c7de20a" containerName="oc" Mar 16 18:32:00 crc kubenswrapper[4736]: E0316 18:32:00.329545 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a79683-81b2-4273-8de1-a68c9daa15cb" containerName="collect-profiles" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.329556 
4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a79683-81b2-4273-8de1-a68c9daa15cb" containerName="collect-profiles" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.331440 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f189480-c87b-4b74-a487-3d172c7de20a" containerName="oc" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.331934 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a79683-81b2-4273-8de1-a68c9daa15cb" containerName="collect-profiles" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.331953 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc2cd3a-4f09-4923-83c9-dc06543c64e4" containerName="registry-server" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.340334 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.372617 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.372615 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.372626 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.397314 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59jq\" (UniqueName: \"kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq\") pod \"auto-csr-approver-29561432-8c7ht\" (UID: \"96eaf803-3df4-4ac7-8938-7514150cadba\") " pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.413431 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561432-8c7ht"] Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.499532 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59jq\" (UniqueName: \"kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq\") pod \"auto-csr-approver-29561432-8c7ht\" (UID: \"96eaf803-3df4-4ac7-8938-7514150cadba\") " pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.543713 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59jq\" (UniqueName: \"kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq\") pod \"auto-csr-approver-29561432-8c7ht\" (UID: \"96eaf803-3df4-4ac7-8938-7514150cadba\") " pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:00 crc kubenswrapper[4736]: I0316 18:32:00.682474 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:02 crc kubenswrapper[4736]: I0316 18:32:02.044841 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561432-8c7ht"] Mar 16 18:32:02 crc kubenswrapper[4736]: W0316 18:32:02.069282 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96eaf803_3df4_4ac7_8938_7514150cadba.slice/crio-5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80 WatchSource:0}: Error finding container 5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80: Status 404 returned error can't find the container with id 5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80 Mar 16 18:32:02 crc kubenswrapper[4736]: I0316 18:32:02.090721 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:32:02 crc kubenswrapper[4736]: I0316 18:32:02.556441 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" event={"ID":"96eaf803-3df4-4ac7-8938-7514150cadba","Type":"ContainerStarted","Data":"5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80"} Mar 16 18:32:04 crc kubenswrapper[4736]: I0316 18:32:04.580699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" event={"ID":"96eaf803-3df4-4ac7-8938-7514150cadba","Type":"ContainerStarted","Data":"d19259c4d350ac02466796098b9507b53ed9ac36c451b9063704857b404063e9"} Mar 16 18:32:04 crc kubenswrapper[4736]: I0316 18:32:04.608127 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" podStartSLOduration=3.146485446 podStartE2EDuration="4.605178123s" podCreationTimestamp="2026-03-16 18:32:00 +0000 UTC" firstStartedPulling="2026-03-16 18:32:02.08602845 +0000 UTC m=+11923.813418737" lastFinishedPulling="2026-03-16 18:32:03.544721097 +0000 UTC m=+11925.272111414" observedRunningTime="2026-03-16 18:32:04.602669864 +0000 UTC m=+11926.330060171" watchObservedRunningTime="2026-03-16 18:32:04.605178123 +0000 UTC m=+11926.332568410" Mar 16 18:32:06 crc kubenswrapper[4736]: I0316 18:32:06.604346 4736 generic.go:334] "Generic (PLEG): container finished" podID="96eaf803-3df4-4ac7-8938-7514150cadba" containerID="d19259c4d350ac02466796098b9507b53ed9ac36c451b9063704857b404063e9" exitCode=0 Mar 16 18:32:06 crc kubenswrapper[4736]: I0316 18:32:06.604432 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" event={"ID":"96eaf803-3df4-4ac7-8938-7514150cadba","Type":"ContainerDied","Data":"d19259c4d350ac02466796098b9507b53ed9ac36c451b9063704857b404063e9"} Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.177826 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.282959 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j59jq\" (UniqueName: \"kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq\") pod \"96eaf803-3df4-4ac7-8938-7514150cadba\" (UID: \"96eaf803-3df4-4ac7-8938-7514150cadba\") " Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.296349 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq" (OuterVolumeSpecName: "kube-api-access-j59jq") pod "96eaf803-3df4-4ac7-8938-7514150cadba" (UID: "96eaf803-3df4-4ac7-8938-7514150cadba"). InnerVolumeSpecName "kube-api-access-j59jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.385309 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j59jq\" (UniqueName: \"kubernetes.io/projected/96eaf803-3df4-4ac7-8938-7514150cadba-kube-api-access-j59jq\") on node \"crc\" DevicePath \"\"" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.509137 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.510369 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.627738 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" event={"ID":"96eaf803-3df4-4ac7-8938-7514150cadba","Type":"ContainerDied","Data":"5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80"} Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.627942 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5598d294296c6168f4e92ff94def6f4ff4cd41088506d9359c7afd4393c11c80" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.627953 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561432-8c7ht" Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.735505 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561426-ljmnv"] Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.746436 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561426-ljmnv"] Mar 16 18:32:08 crc kubenswrapper[4736]: I0316 18:32:08.989740 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f0c706-a0fd-4796-8602-38fd8cca14e0" path="/var/lib/kubelet/pods/e9f0c706-a0fd-4796-8602-38fd8cca14e0/volumes" Mar 16 18:32:09 crc kubenswrapper[4736]: I0316 18:32:09.905214 4736 scope.go:117] "RemoveContainer" containerID="58bd31c7f49cc46205052426da7b18ee9fac9f08eed75983c14133aee692dcd8" Mar 16 18:32:09 crc kubenswrapper[4736]: I0316 18:32:09.974156 4736 scope.go:117] "RemoveContainer" containerID="960872a26df4a98ec20e987d0b20bcb6483c090517cd24a4bc058b1b34bd3686" Mar 16 18:32:10 crc kubenswrapper[4736]: I0316 18:32:09.995976 4736 scope.go:117] "RemoveContainer" containerID="c2684d20c11709cda9ec27f10cd4b80794ed0bb6c9016bde46f98bbf27f2b974" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.314680 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:20 crc kubenswrapper[4736]: E0316 18:32:20.315804 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96eaf803-3df4-4ac7-8938-7514150cadba" containerName="oc" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.315818 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="96eaf803-3df4-4ac7-8938-7514150cadba" containerName="oc" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.316040 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="96eaf803-3df4-4ac7-8938-7514150cadba" containerName="oc" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.318628 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.333461 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.465492 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.465607 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.465791 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggb8h\" (UniqueName: \"kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.567476 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.567601 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.567663 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggb8h\" (UniqueName: \"kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.567969 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.568093 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.589889 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ggb8h\" (UniqueName: \"kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h\") pod \"redhat-marketplace-28wn9\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:20 crc kubenswrapper[4736]: I0316 18:32:20.649162 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:21 crc kubenswrapper[4736]: I0316 18:32:21.156577 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:21 crc kubenswrapper[4736]: I0316 18:32:21.770869 4736 generic.go:334] "Generic (PLEG): container finished" podID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerID="84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c" exitCode=0 Mar 16 18:32:21 crc kubenswrapper[4736]: I0316 18:32:21.771070 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerDied","Data":"84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c"} Mar 16 18:32:21 crc kubenswrapper[4736]: I0316 18:32:21.771283 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerStarted","Data":"93cbba1396cee7192041ded5664de64afa9ad9eede52a925323ae0f7e57aeb84"} Mar 16 18:32:23 crc kubenswrapper[4736]: I0316 18:32:23.797395 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerStarted","Data":"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68"} Mar 16 18:32:24 crc kubenswrapper[4736]: I0316 18:32:24.811079 4736 generic.go:334] "Generic (PLEG): container finished" podID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerID="62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68" exitCode=0 Mar 16 18:32:24 crc kubenswrapper[4736]: I0316 18:32:24.811383 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerDied","Data":"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68"} Mar 16 18:32:25 crc kubenswrapper[4736]: I0316 18:32:25.824407 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerStarted","Data":"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499"} Mar 16 18:32:25 crc kubenswrapper[4736]: I0316 18:32:25.850982 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-28wn9" podStartSLOduration=2.3198235560000002 podStartE2EDuration="5.850960874s" podCreationTimestamp="2026-03-16 18:32:20 +0000 UTC" firstStartedPulling="2026-03-16 18:32:21.773045032 +0000 UTC m=+11943.500435329" lastFinishedPulling="2026-03-16 18:32:25.30418233 +0000 UTC m=+11947.031572647" observedRunningTime="2026-03-16 18:32:25.839859431 +0000 UTC m=+11947.567249728" watchObservedRunningTime="2026-03-16 18:32:25.850960874 +0000 UTC m=+11947.578351171" Mar 16 18:32:30 crc kubenswrapper[4736]: I0316 18:32:30.650113 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:30 crc kubenswrapper[4736]: I0316 18:32:30.650550 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:31 crc kubenswrapper[4736]: I0316 18:32:31.735789 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-28wn9" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="registry-server" probeResult="failure" output=< Mar 16 18:32:31 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:32:31 crc kubenswrapper[4736]: > Mar 16 18:32:38 crc kubenswrapper[4736]: I0316 18:32:38.508126 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:32:38 crc kubenswrapper[4736]: I0316 18:32:38.508951 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:32:40 crc kubenswrapper[4736]: I0316 18:32:40.740012 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:40 crc kubenswrapper[4736]: I0316 18:32:40.812599 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:40 crc kubenswrapper[4736]: I0316 18:32:40.991363 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:41 crc kubenswrapper[4736]: I0316 18:32:41.984920 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-28wn9" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="registry-server" containerID="cri-o://f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499" gracePeriod=2 Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.520192 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.631540 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content\") pod \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.631999 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggb8h\" (UniqueName: \"kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h\") pod \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.632213 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities\") pod \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\" (UID: \"48655395-dc8f-4a4b-8068-c9c3c5452ed6\") " Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.635824 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities" (OuterVolumeSpecName: "utilities") pod "48655395-dc8f-4a4b-8068-c9c3c5452ed6" (UID: "48655395-dc8f-4a4b-8068-c9c3c5452ed6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.654485 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h" (OuterVolumeSpecName: "kube-api-access-ggb8h") pod "48655395-dc8f-4a4b-8068-c9c3c5452ed6" (UID: "48655395-dc8f-4a4b-8068-c9c3c5452ed6"). InnerVolumeSpecName "kube-api-access-ggb8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.705924 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48655395-dc8f-4a4b-8068-c9c3c5452ed6" (UID: "48655395-dc8f-4a4b-8068-c9c3c5452ed6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.733953 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.733980 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48655395-dc8f-4a4b-8068-c9c3c5452ed6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:32:42 crc kubenswrapper[4736]: I0316 18:32:42.733990 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggb8h\" (UniqueName: \"kubernetes.io/projected/48655395-dc8f-4a4b-8068-c9c3c5452ed6-kube-api-access-ggb8h\") on node \"crc\" DevicePath \"\"" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.000287 4736 generic.go:334] "Generic (PLEG): container finished" podID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerID="f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499" exitCode=0 Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.000346 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerDied","Data":"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499"} Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.000371 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-28wn9" event={"ID":"48655395-dc8f-4a4b-8068-c9c3c5452ed6","Type":"ContainerDied","Data":"93cbba1396cee7192041ded5664de64afa9ad9eede52a925323ae0f7e57aeb84"} Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.000366 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-28wn9" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.000390 4736 scope.go:117] "RemoveContainer" containerID="f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.036988 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.046448 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-28wn9"] Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.061623 4736 scope.go:117] "RemoveContainer" containerID="62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.099503 4736 scope.go:117] "RemoveContainer" containerID="84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.142278 4736 scope.go:117] "RemoveContainer" containerID="f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499" Mar 16 18:32:43 crc kubenswrapper[4736]: E0316 18:32:43.145760 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499\": container with ID starting with f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499 not found: ID does not exist" containerID="f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.145806 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499"} err="failed to get container status \"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499\": rpc error: code = NotFound desc = could not find container \"f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499\": container with ID starting with f7d83c7461f0af9a7890a42877639317e75fbf709c12ef2262183c110caee499 not found: ID does not exist" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.145840 4736 scope.go:117] "RemoveContainer" containerID="62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68" Mar 16 18:32:43 crc kubenswrapper[4736]: E0316 18:32:43.146264 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68\": container with ID starting with 62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68 not found: ID does not exist" containerID="62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.146295 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68"} err="failed to get container status \"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68\": rpc error: code = NotFound desc = could not find container \"62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68\": container with ID starting with 62df60914d248a10c0eaaa091a2370d22f6631de1ca32dbdd2d1d719d9b95c68 not found: ID does not exist" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.146317 4736 scope.go:117] "RemoveContainer" 
containerID="84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c" Mar 16 18:32:43 crc kubenswrapper[4736]: E0316 18:32:43.147170 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c\": container with ID starting with 84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c not found: ID does not exist" containerID="84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c" Mar 16 18:32:43 crc kubenswrapper[4736]: I0316 18:32:43.147264 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c"} err="failed to get container status \"84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c\": rpc error: code = NotFound desc = could not find container \"84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c\": container with ID starting with 84b6d53167f25ced7c5f681f751255e8a9ac7c1f75b3ff442edf7b35f249f20c not found: ID does not exist" Mar 16 18:32:44 crc kubenswrapper[4736]: I0316 18:32:44.991573 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" path="/var/lib/kubelet/pods/48655395-dc8f-4a4b-8068-c9c3c5452ed6/volumes" Mar 16 18:33:08 crc kubenswrapper[4736]: I0316 18:33:08.508521 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:33:08 crc kubenswrapper[4736]: I0316 18:33:08.509045 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:33:08 crc kubenswrapper[4736]: I0316 18:33:08.509094 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:33:08 crc kubenswrapper[4736]: I0316 18:33:08.509968 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:33:08 crc kubenswrapper[4736]: I0316 18:33:08.510032 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" gracePeriod=600 Mar 16 18:33:08 crc kubenswrapper[4736]: E0316 18:33:08.643015 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:33:09 crc kubenswrapper[4736]: I0316 18:33:09.243986 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" exitCode=0 Mar 16 18:33:09 crc kubenswrapper[4736]: I0316 18:33:09.244040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046"} Mar 16 18:33:09 crc kubenswrapper[4736]: I0316 18:33:09.244074 4736 scope.go:117] "RemoveContainer" containerID="7d0a720391c54d316f2f053aadbe78e9a28048d81dc5850440d9d91718b079e2" Mar 16 18:33:09 crc kubenswrapper[4736]: I0316 18:33:09.244758 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:33:09 crc kubenswrapper[4736]: E0316 18:33:09.244990 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:33:23 crc kubenswrapper[4736]: I0316 18:33:23.978828 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:33:23 crc kubenswrapper[4736]: E0316 18:33:23.980026 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:33:37 crc kubenswrapper[4736]: I0316 18:33:37.978791 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:33:37 crc kubenswrapper[4736]: E0316 18:33:37.979824 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:33:49 crc kubenswrapper[4736]: I0316 18:33:49.978166 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:33:49 crc kubenswrapper[4736]: E0316 18:33:49.978959 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.232793 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561434-nbs79"] Mar 16 18:34:00 crc kubenswrapper[4736]: E0316 18:34:00.241666 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="extract-content" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.241688 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="extract-content" Mar 16 18:34:00 crc kubenswrapper[4736]: E0316 18:34:00.241712 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="registry-server" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.241721 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="registry-server" Mar 16 18:34:00 crc kubenswrapper[4736]: E0316 18:34:00.241749 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="extract-utilities" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.241756 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="extract-utilities" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.242800 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="48655395-dc8f-4a4b-8068-c9c3c5452ed6" containerName="registry-server" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.248718 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.263658 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.263674 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.263748 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.269806 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561434-nbs79"] Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.368351 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qj58\" (UniqueName: \"kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58\") pod \"auto-csr-approver-29561434-nbs79\" (UID: \"940e4f5b-4c4e-483d-8173-da87b8c05e91\") " pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.469380 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qj58\" (UniqueName: \"kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58\") pod \"auto-csr-approver-29561434-nbs79\" (UID: \"940e4f5b-4c4e-483d-8173-da87b8c05e91\") " pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.507871 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qj58\" (UniqueName: 
\"kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58\") pod \"auto-csr-approver-29561434-nbs79\" (UID: \"940e4f5b-4c4e-483d-8173-da87b8c05e91\") " pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:00 crc kubenswrapper[4736]: I0316 18:34:00.575257 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:01 crc kubenswrapper[4736]: I0316 18:34:01.874943 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561434-nbs79"] Mar 16 18:34:01 crc kubenswrapper[4736]: W0316 18:34:01.904355 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940e4f5b_4c4e_483d_8173_da87b8c05e91.slice/crio-ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22 WatchSource:0}: Error finding container ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22: Status 404 returned error can't find the container with id ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22 Mar 16 18:34:02 crc kubenswrapper[4736]: I0316 18:34:02.796620 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561434-nbs79" event={"ID":"940e4f5b-4c4e-483d-8173-da87b8c05e91","Type":"ContainerStarted","Data":"ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22"} Mar 16 18:34:04 crc kubenswrapper[4736]: I0316 18:34:04.816582 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561434-nbs79" event={"ID":"940e4f5b-4c4e-483d-8173-da87b8c05e91","Type":"ContainerStarted","Data":"9784ac8777e9f5ec722bc0b3df38151aab4b9f3850a81864233a8f83ea89f8cc"} Mar 16 18:34:04 crc kubenswrapper[4736]: I0316 18:34:04.843603 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561434-nbs79" podStartSLOduration=3.821267394 podStartE2EDuration="4.83844736s" podCreationTimestamp="2026-03-16 18:34:00 +0000 UTC" firstStartedPulling="2026-03-16 18:34:01.917703583 +0000 UTC m=+12043.645093880" lastFinishedPulling="2026-03-16 18:34:02.934883559 +0000 UTC m=+12044.662273846" observedRunningTime="2026-03-16 18:34:04.833201187 +0000 UTC m=+12046.560591474" watchObservedRunningTime="2026-03-16 18:34:04.83844736 +0000 UTC m=+12046.565837647" Mar 16 18:34:04 crc kubenswrapper[4736]: I0316 18:34:04.978833 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:34:04 crc kubenswrapper[4736]: E0316 18:34:04.979621 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:34:05 crc kubenswrapper[4736]: I0316 18:34:05.829644 4736 generic.go:334] "Generic (PLEG): container finished" podID="940e4f5b-4c4e-483d-8173-da87b8c05e91" containerID="9784ac8777e9f5ec722bc0b3df38151aab4b9f3850a81864233a8f83ea89f8cc" exitCode=0 Mar 16 18:34:05 crc kubenswrapper[4736]: I0316 18:34:05.829699 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561434-nbs79" 
event={"ID":"940e4f5b-4c4e-483d-8173-da87b8c05e91","Type":"ContainerDied","Data":"9784ac8777e9f5ec722bc0b3df38151aab4b9f3850a81864233a8f83ea89f8cc"} Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.259316 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.405125 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qj58\" (UniqueName: \"kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58\") pod \"940e4f5b-4c4e-483d-8173-da87b8c05e91\" (UID: \"940e4f5b-4c4e-483d-8173-da87b8c05e91\") " Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.434509 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58" (OuterVolumeSpecName: "kube-api-access-7qj58") pod "940e4f5b-4c4e-483d-8173-da87b8c05e91" (UID: "940e4f5b-4c4e-483d-8173-da87b8c05e91"). InnerVolumeSpecName "kube-api-access-7qj58". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.507486 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qj58\" (UniqueName: \"kubernetes.io/projected/940e4f5b-4c4e-483d-8173-da87b8c05e91-kube-api-access-7qj58\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.847223 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561434-nbs79" event={"ID":"940e4f5b-4c4e-483d-8173-da87b8c05e91","Type":"ContainerDied","Data":"ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22"} Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.847501 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecafaeb2957e9e959c08429db319f4b125864d0bfdb374bd0c095428719e0f22" Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.847468 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561434-nbs79" Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.931856 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561428-gpw65"] Mar 16 18:34:07 crc kubenswrapper[4736]: I0316 18:34:07.943376 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561428-gpw65"] Mar 16 18:34:08 crc kubenswrapper[4736]: I0316 18:34:08.990940 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d" path="/var/lib/kubelet/pods/8d1c1e3d-b99c-4aaa-b7b1-ffe18b15db6d/volumes" Mar 16 18:34:10 crc kubenswrapper[4736]: I0316 18:34:10.277590 4736 scope.go:117] "RemoveContainer" containerID="ea7f65d77b54958e5e2f0f3293b2650fb9d1b07a98d4b34821b05d712819d346" Mar 16 18:34:15 crc kubenswrapper[4736]: I0316 18:34:15.977814 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:34:15 crc kubenswrapper[4736]: E0316 18:34:15.978504 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:34:30 crc kubenswrapper[4736]: I0316 18:34:30.978402 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:34:30 crc kubenswrapper[4736]: E0316 18:34:30.979370 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:34:38 crc kubenswrapper[4736]: I0316 18:34:38.202000 4736 generic.go:334] "Generic (PLEG): container finished" podID="252155f6-a310-43e1-bf80-1d17a2db2128" containerID="c6fd4d0d0888ee52fc3ce7ef3616d645b9b8efdc7e9d72c27c702ca295bf4e41" exitCode=0 Mar 16 18:34:38 crc kubenswrapper[4736]: I0316 18:34:38.202140 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"252155f6-a310-43e1-bf80-1d17a2db2128","Type":"ContainerDied","Data":"c6fd4d0d0888ee52fc3ce7ef3616d645b9b8efdc7e9d72c27c702ca295bf4e41"} Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.051370 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.124832 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.124898 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.124980 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125026 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125118 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125161 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125214 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7g5\" (UniqueName: \"kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125245 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.125290 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config\") pod \"252155f6-a310-43e1-bf80-1d17a2db2128\" (UID: \"252155f6-a310-43e1-bf80-1d17a2db2128\") " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.132168 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data" (OuterVolumeSpecName: 
"config-data") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.132497 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.144566 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.150043 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5" (OuterVolumeSpecName: "kube-api-access-fz7g5") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "kube-api-access-fz7g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.150482 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.155158 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.160797 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.174578 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.177077 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "252155f6-a310-43e1-bf80-1d17a2db2128" (UID: "252155f6-a310-43e1-bf80-1d17a2db2128"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.224969 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"252155f6-a310-43e1-bf80-1d17a2db2128","Type":"ContainerDied","Data":"20fffcdbdd87913f5e08278186713ac34ac9a6fe976cdc719543b545cad2cfdf"} Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.225143 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fffcdbdd87913f5e08278186713ac34ac9a6fe976cdc719543b545cad2cfdf" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.225169 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.231747 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.231894 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.231959 4736 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232043 4736 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232158 4736 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232227 4736 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/252155f6-a310-43e1-bf80-1d17a2db2128-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232288 4736 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/252155f6-a310-43e1-bf80-1d17a2db2128-config-data\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232340 4736 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/252155f6-a310-43e1-bf80-1d17a2db2128-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.232397 4736 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fz7g5\" (UniqueName: \"kubernetes.io/projected/252155f6-a310-43e1-bf80-1d17a2db2128-kube-api-access-fz7g5\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.252589 4736 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 16 18:34:40 crc kubenswrapper[4736]: I0316 18:34:40.334052 4736 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Mar 16 18:34:45 crc kubenswrapper[4736]: I0316 18:34:45.978153 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:34:45 crc kubenswrapper[4736]: E0316 18:34:45.979339 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.813340 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 16 18:34:48 crc kubenswrapper[4736]: E0316 18:34:48.814610 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252155f6-a310-43e1-bf80-1d17a2db2128" containerName="tempest-tests-tempest-tests-runner" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.814634 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="252155f6-a310-43e1-bf80-1d17a2db2128" containerName="tempest-tests-tempest-tests-runner" Mar 16 18:34:48 crc kubenswrapper[4736]: E0316 18:34:48.814681 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940e4f5b-4c4e-483d-8173-da87b8c05e91" containerName="oc" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.814694 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="940e4f5b-4c4e-483d-8173-da87b8c05e91" containerName="oc" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.815080 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="940e4f5b-4c4e-483d-8173-da87b8c05e91" containerName="oc" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.815136 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="252155f6-a310-43e1-bf80-1d17a2db2128" containerName="tempest-tests-tempest-tests-runner" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.815882 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.819372 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zn5xx" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.831634 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.936170 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:48 crc kubenswrapper[4736]: I0316 18:34:48.936231 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw9x\" (UniqueName: \"kubernetes.io/projected/75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19-kube-api-access-rfw9x\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.037779 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.038182 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfw9x\" (UniqueName: \"kubernetes.io/projected/75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19-kube-api-access-rfw9x\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.038935 4736 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.066864 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfw9x\" (UniqueName: \"kubernetes.io/projected/75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19-kube-api-access-rfw9x\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.078005 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc 
kubenswrapper[4736]: I0316 18:34:49.139772 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 16 18:34:49 crc kubenswrapper[4736]: I0316 18:34:49.665881 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 16 18:34:50 crc kubenswrapper[4736]: I0316 18:34:50.340687 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19","Type":"ContainerStarted","Data":"f2c4b4d0968ed79772931532eaf3e9dca2cab94a87a079d2eb3a31e9fc034bbd"} Mar 16 18:34:51 crc kubenswrapper[4736]: I0316 18:34:51.354330 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19","Type":"ContainerStarted","Data":"5db26ccb681831b2b47833615ff5e910feab6408b61b198c64f4d56e7a1e4252"} Mar 16 18:34:51 crc kubenswrapper[4736]: I0316 18:34:51.371716 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.114395359 podStartE2EDuration="3.371696004s" podCreationTimestamp="2026-03-16 18:34:48 +0000 UTC" firstStartedPulling="2026-03-16 18:34:49.671500699 +0000 UTC m=+12091.398891006" lastFinishedPulling="2026-03-16 18:34:50.928801354 +0000 UTC m=+12092.656191651" observedRunningTime="2026-03-16 18:34:51.367199901 +0000 UTC m=+12093.094590228" watchObservedRunningTime="2026-03-16 18:34:51.371696004 +0000 UTC m=+12093.099086301" Mar 16 18:34:56 crc kubenswrapper[4736]: I0316 18:34:56.983549 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:34:56 crc kubenswrapper[4736]: E0316 18:34:56.991239 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:35:10 crc kubenswrapper[4736]: I0316 18:35:10.978826 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:35:10 crc kubenswrapper[4736]: E0316 18:35:10.980039 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.104791 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.107575 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.119913 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.174032 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.174249 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4xq\" (UniqueName: \"kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.174313 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.276664 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.277012 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4xq\" (UniqueName: \"kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.277192 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.277430 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.277978 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.305093 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qb4xq\" (UniqueName: \"kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq\") pod \"certified-operators-ptxh8\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.465770 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:12 crc kubenswrapper[4736]: I0316 18:35:12.976378 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.621405 4736 generic.go:334] "Generic (PLEG): container finished" podID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerID="edf79265e7b55c440cecb1245a13758f88c5acb3f43c6a803546c438b014c8d5" exitCode=0 Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.621515 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerDied","Data":"edf79265e7b55c440cecb1245a13758f88c5acb3f43c6a803546c438b014c8d5"} Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.621722 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerStarted","Data":"d45b15cd8d55eade197d07aead86cd4d3a4a7606986410b182f210913dab0ab6"} Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.927175 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vw8m2/must-gather-9456n"] Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.929453 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.932604 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vw8m2"/"openshift-service-ca.crt" Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.932780 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vw8m2"/"kube-root-ca.crt" Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.933392 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vw8m2"/"default-dockercfg-dvrj2" Mar 16 18:35:13 crc kubenswrapper[4736]: I0316 18:35:13.946414 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vw8m2/must-gather-9456n"] Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.014617 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.014737 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbtj\" (UniqueName: \"kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.116350 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.116698 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbtj\" (UniqueName: \"kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.116858 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.143965 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbtj\" (UniqueName: \"kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj\") pod \"must-gather-9456n\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.247508 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:35:14 crc kubenswrapper[4736]: I0316 18:35:14.781024 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vw8m2/must-gather-9456n"] Mar 16 18:35:14 crc kubenswrapper[4736]: W0316 18:35:14.788791 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e738567_0291_4d67_99c4_7e217da6c59e.slice/crio-bf7db84775ab403738ce0f6196f0fa4e143deab742e6389215bd4d0fdc49e777 WatchSource:0}: Error finding container bf7db84775ab403738ce0f6196f0fa4e143deab742e6389215bd4d0fdc49e777: Status 404 returned error can't find the container with id bf7db84775ab403738ce0f6196f0fa4e143deab742e6389215bd4d0fdc49e777 Mar 16 18:35:15 crc kubenswrapper[4736]: I0316 18:35:15.647628 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/must-gather-9456n" event={"ID":"2e738567-0291-4d67-99c4-7e217da6c59e","Type":"ContainerStarted","Data":"bf7db84775ab403738ce0f6196f0fa4e143deab742e6389215bd4d0fdc49e777"} Mar 16 18:35:16 crc kubenswrapper[4736]: I0316 18:35:16.658237 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerStarted","Data":"d87b57b87145ffb417a94a5c469c461af6cbc9d8a771a12bf408240dcd8e4e17"} Mar 16 18:35:18 crc kubenswrapper[4736]: I0316 18:35:18.722722 4736 generic.go:334] "Generic (PLEG): container finished" podID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerID="d87b57b87145ffb417a94a5c469c461af6cbc9d8a771a12bf408240dcd8e4e17" exitCode=0 Mar 16 18:35:18 crc kubenswrapper[4736]: I0316 18:35:18.722942 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerDied","Data":"d87b57b87145ffb417a94a5c469c461af6cbc9d8a771a12bf408240dcd8e4e17"} Mar 16 18:35:23 crc kubenswrapper[4736]: I0316 18:35:23.783006 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/must-gather-9456n" event={"ID":"2e738567-0291-4d67-99c4-7e217da6c59e","Type":"ContainerStarted","Data":"cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6"} Mar 16 18:35:23 crc kubenswrapper[4736]: I0316 18:35:23.783609 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/must-gather-9456n" event={"ID":"2e738567-0291-4d67-99c4-7e217da6c59e","Type":"ContainerStarted","Data":"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac"} Mar 16 18:35:23 crc kubenswrapper[4736]: I0316 18:35:23.786755 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerStarted","Data":"2cfe446ca08fb68f9e25d6a95c3650230bd05597b1761e23733476a2b46695c3"} Mar 16 18:35:23 crc kubenswrapper[4736]: I0316 18:35:23.805990 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vw8m2/must-gather-9456n" podStartSLOduration=2.741047687 podStartE2EDuration="10.80596639s" podCreationTimestamp="2026-03-16 18:35:13 +0000 UTC" firstStartedPulling="2026-03-16 18:35:14.791342892 +0000 UTC m=+12116.518733179" lastFinishedPulling="2026-03-16 18:35:22.856261595 +0000 UTC m=+12124.583651882" observedRunningTime="2026-03-16 18:35:23.797314574 +0000 UTC m=+12125.524704901" 
watchObservedRunningTime="2026-03-16 18:35:23.80596639 +0000 UTC m=+12125.533356697" Mar 16 18:35:23 crc kubenswrapper[4736]: I0316 18:35:23.860480 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ptxh8" podStartSLOduration=2.629595859 podStartE2EDuration="11.860466866s" podCreationTimestamp="2026-03-16 18:35:12 +0000 UTC" firstStartedPulling="2026-03-16 18:35:13.624573976 +0000 UTC m=+12115.351964273" lastFinishedPulling="2026-03-16 18:35:22.855444983 +0000 UTC m=+12124.582835280" observedRunningTime="2026-03-16 18:35:23.854804542 +0000 UTC m=+12125.582194839" watchObservedRunningTime="2026-03-16 18:35:23.860466866 +0000 UTC m=+12125.587857143" Mar 16 18:35:24 crc kubenswrapper[4736]: I0316 18:35:24.977987 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:35:24 crc kubenswrapper[4736]: E0316 18:35:24.978504 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.575752 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-vr96t"] Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.578975 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.654642 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host\") pod \"crc-debug-vr96t\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.655302 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mkzs\" (UniqueName: \"kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs\") pod \"crc-debug-vr96t\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.760447 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host\") pod \"crc-debug-vr96t\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.760602 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mkzs\" (UniqueName: \"kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs\") pod \"crc-debug-vr96t\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.763058 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host\") pod \"crc-debug-vr96t\" 
(UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.792402 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mkzs\" (UniqueName: \"kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs\") pod \"crc-debug-vr96t\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:29 crc kubenswrapper[4736]: I0316 18:35:29.895650 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:35:30 crc kubenswrapper[4736]: I0316 18:35:30.881349 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" event={"ID":"87b6c5ae-069e-4bcc-96b8-74a495512e51","Type":"ContainerStarted","Data":"4eb28aaec48e4a904de6948ce96944ae14ed41fa5c02d5a0c5a3fe4583dafe63"} Mar 16 18:35:32 crc kubenswrapper[4736]: I0316 18:35:32.466926 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:32 crc kubenswrapper[4736]: I0316 18:35:32.467316 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.532779 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ptxh8" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="registry-server" probeResult="failure" output=< Mar 16 18:35:33 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:35:33 crc kubenswrapper[4736]: > Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.800200 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.801987 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.816147 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.848179 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.848446 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4vr4\" (UniqueName: \"kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.848482 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.949305 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.949701 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.949836 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4vr4\" (UniqueName: \"kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.950571 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.950926 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:33 crc kubenswrapper[4736]: I0316 18:35:33.970220 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q4vr4\" (UniqueName: \"kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4\") pod \"redhat-operators-j2zcp\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:34 crc kubenswrapper[4736]: I0316 18:35:34.129617 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:34 crc kubenswrapper[4736]: I0316 18:35:34.673978 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:35:34 crc kubenswrapper[4736]: I0316 18:35:34.923036 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerStarted","Data":"bac71afb787e640f1f9b4db56a3f1f78d5a2556d430d8437f66e1e96763a49bf"} Mar 16 18:35:35 crc kubenswrapper[4736]: I0316 18:35:35.959229 4736 generic.go:334] "Generic (PLEG): container finished" podID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerID="7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c" exitCode=0 Mar 16 18:35:35 crc kubenswrapper[4736]: I0316 18:35:35.959765 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerDied","Data":"7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c"} Mar 16 18:35:38 crc kubenswrapper[4736]: I0316 18:35:38.995302 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:35:38 crc kubenswrapper[4736]: E0316 18:35:38.996174 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:35:42 crc kubenswrapper[4736]: I0316 18:35:42.519513 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:42 crc kubenswrapper[4736]: I0316 18:35:42.570403 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:43 crc kubenswrapper[4736]: I0316 18:35:43.020720 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" event={"ID":"87b6c5ae-069e-4bcc-96b8-74a495512e51","Type":"ContainerStarted","Data":"f019c9549d9b69a88f4d5a16c33301b706f65bf302e2ecf825a16a11a135612e"} Mar 16 18:35:43 crc kubenswrapper[4736]: I0316 18:35:43.042543 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" podStartSLOduration=1.729923662 podStartE2EDuration="14.042520928s" podCreationTimestamp="2026-03-16 18:35:29 +0000 UTC" firstStartedPulling="2026-03-16 18:35:29.93466873 +0000 UTC m=+12131.662059017" lastFinishedPulling="2026-03-16 18:35:42.247265996 +0000 UTC m=+12143.974656283" observedRunningTime="2026-03-16 18:35:43.035859695 +0000 UTC m=+12144.763249972" watchObservedRunningTime="2026-03-16 18:35:43.042520928 +0000 UTC m=+12144.769911215" Mar 16 18:35:43 crc 
kubenswrapper[4736]: I0316 18:35:43.322023 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:44 crc kubenswrapper[4736]: I0316 18:35:44.030645 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerStarted","Data":"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2"} Mar 16 18:35:44 crc kubenswrapper[4736]: I0316 18:35:44.033847 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ptxh8" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="registry-server" containerID="cri-o://2cfe446ca08fb68f9e25d6a95c3650230bd05597b1761e23733476a2b46695c3" gracePeriod=2 Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.051846 4736 generic.go:334] "Generic (PLEG): container finished" podID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerID="2cfe446ca08fb68f9e25d6a95c3650230bd05597b1761e23733476a2b46695c3" exitCode=0 Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.051943 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerDied","Data":"2cfe446ca08fb68f9e25d6a95c3650230bd05597b1761e23733476a2b46695c3"} Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.126493 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.282874 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4xq\" (UniqueName: \"kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq\") pod \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.283176 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities\") pod \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.283301 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content\") pod \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\" (UID: \"bb2bfbd9-20ad-4887-ae28-402b64b1e69f\") " Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.283804 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities" (OuterVolumeSpecName: "utilities") pod "bb2bfbd9-20ad-4887-ae28-402b64b1e69f" (UID: "bb2bfbd9-20ad-4887-ae28-402b64b1e69f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.293176 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq" (OuterVolumeSpecName: "kube-api-access-qb4xq") pod "bb2bfbd9-20ad-4887-ae28-402b64b1e69f" (UID: "bb2bfbd9-20ad-4887-ae28-402b64b1e69f"). InnerVolumeSpecName "kube-api-access-qb4xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.356232 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb2bfbd9-20ad-4887-ae28-402b64b1e69f" (UID: "bb2bfbd9-20ad-4887-ae28-402b64b1e69f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.385801 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.385837 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:35:45 crc kubenswrapper[4736]: I0316 18:35:45.385852 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4xq\" (UniqueName: \"kubernetes.io/projected/bb2bfbd9-20ad-4887-ae28-402b64b1e69f-kube-api-access-qb4xq\") on node \"crc\" DevicePath \"\"" Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.064134 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxh8" event={"ID":"bb2bfbd9-20ad-4887-ae28-402b64b1e69f","Type":"ContainerDied","Data":"d45b15cd8d55eade197d07aead86cd4d3a4a7606986410b182f210913dab0ab6"} Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.064489 4736 scope.go:117] "RemoveContainer" containerID="2cfe446ca08fb68f9e25d6a95c3650230bd05597b1761e23733476a2b46695c3" Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.064176 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxh8" Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.094864 4736 scope.go:117] "RemoveContainer" containerID="d87b57b87145ffb417a94a5c469c461af6cbc9d8a771a12bf408240dcd8e4e17" Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.138245 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.142892 4736 scope.go:117] "RemoveContainer" containerID="edf79265e7b55c440cecb1245a13758f88c5acb3f43c6a803546c438b014c8d5" Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.171394 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ptxh8"] Mar 16 18:35:46 crc kubenswrapper[4736]: I0316 18:35:46.988556 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" path="/var/lib/kubelet/pods/bb2bfbd9-20ad-4887-ae28-402b64b1e69f/volumes" Mar 16 18:35:49 crc kubenswrapper[4736]: I0316 18:35:49.091306 4736 generic.go:334] "Generic (PLEG): container finished" podID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerID="b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2" exitCode=0 Mar 16 18:35:49 crc kubenswrapper[4736]: I0316 18:35:49.091412 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerDied","Data":"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2"} Mar 16 18:35:50 crc kubenswrapper[4736]: I0316 18:35:50.979073 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:35:50 crc kubenswrapper[4736]: E0316 18:35:50.979922 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:35:52 crc kubenswrapper[4736]: I0316 18:35:52.135711 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerStarted","Data":"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6"} Mar 16 18:35:52 crc kubenswrapper[4736]: I0316 18:35:52.175967 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2zcp" podStartSLOduration=10.268916312 podStartE2EDuration="19.175947446s" podCreationTimestamp="2026-03-16 18:35:33 +0000 UTC" firstStartedPulling="2026-03-16 18:35:42.189582962 +0000 UTC m=+12143.916973249" lastFinishedPulling="2026-03-16 18:35:51.096614076 +0000 UTC m=+12152.824004383" observedRunningTime="2026-03-16 18:35:52.164909725 +0000 UTC m=+12153.892300012" watchObservedRunningTime="2026-03-16 18:35:52.175947446 +0000 UTC m=+12153.903337733" Mar 16 18:35:54 crc kubenswrapper[4736]: I0316 18:35:54.130140 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:54 crc kubenswrapper[4736]: I0316 18:35:54.130670 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:35:55 crc kubenswrapper[4736]: I0316 18:35:55.180314 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2zcp" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" probeResult="failure" output=< Mar 16 18:35:55 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:35:55 crc kubenswrapper[4736]: > Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.150241 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561436-7qf94"] Mar 16 18:36:00 crc kubenswrapper[4736]: E0316 18:36:00.151027 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="registry-server" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.151039 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="registry-server" Mar 16 18:36:00 crc kubenswrapper[4736]: E0316 18:36:00.151061 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="extract-content" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.151067 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="extract-content" Mar 16 18:36:00 crc kubenswrapper[4736]: E0316 18:36:00.151075 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="extract-utilities" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.151081 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="extract-utilities" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.151301 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb2bfbd9-20ad-4887-ae28-402b64b1e69f" containerName="registry-server" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.151915 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.158160 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561436-7qf94"] Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.160013 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.160303 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.160515 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.293419 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cwns\" (UniqueName: \"kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns\") pod \"auto-csr-approver-29561436-7qf94\" (UID: \"9884ed3c-e0e7-4e2b-baff-102c4f3dab68\") " pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.395309 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cwns\" (UniqueName: \"kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns\") pod \"auto-csr-approver-29561436-7qf94\" (UID: \"9884ed3c-e0e7-4e2b-baff-102c4f3dab68\") " pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.418449 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cwns\" (UniqueName: \"kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns\") pod \"auto-csr-approver-29561436-7qf94\" (UID: \"9884ed3c-e0e7-4e2b-baff-102c4f3dab68\") " pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:00 crc kubenswrapper[4736]: I0316 18:36:00.481388 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:01 crc kubenswrapper[4736]: I0316 18:36:01.119176 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561436-7qf94"] Mar 16 18:36:01 crc kubenswrapper[4736]: I0316 18:36:01.215139 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561436-7qf94" event={"ID":"9884ed3c-e0e7-4e2b-baff-102c4f3dab68","Type":"ContainerStarted","Data":"129dbc09639f148e2b67991fd873a7c7544a99c7f5c6fc0014fa05b0c731c918"} Mar 16 18:36:02 crc kubenswrapper[4736]: I0316 18:36:02.979422 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:36:02 crc kubenswrapper[4736]: E0316 18:36:02.980022 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:36:03 crc kubenswrapper[4736]: I0316 18:36:03.231323 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561436-7qf94" event={"ID":"9884ed3c-e0e7-4e2b-baff-102c4f3dab68","Type":"ContainerStarted","Data":"892af5277c94b3e9c63fa7d163d5a810c7822333a926029d5a719078ef895b52"} Mar 16 18:36:03 crc kubenswrapper[4736]: I0316 18:36:03.248669 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561436-7qf94" podStartSLOduration=2.020284706 podStartE2EDuration="3.248653141s" podCreationTimestamp="2026-03-16 18:36:00 +0000 UTC" firstStartedPulling="2026-03-16 18:36:01.125476948 +0000 UTC m=+12162.852867235" lastFinishedPulling="2026-03-16 18:36:02.353845393 +0000 UTC m=+12164.081235670" observedRunningTime="2026-03-16 18:36:03.247477228 +0000 UTC m=+12164.974867515" watchObservedRunningTime="2026-03-16 18:36:03.248653141 +0000 UTC m=+12164.976043428" Mar 16 18:36:05 crc kubenswrapper[4736]: I0316 18:36:05.181176 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2zcp" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" probeResult="failure" output=< Mar 16 18:36:05 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:36:05 crc kubenswrapper[4736]: > Mar 16 18:36:05 crc kubenswrapper[4736]: I0316 18:36:05.253388 4736 generic.go:334] "Generic (PLEG): container finished" podID="9884ed3c-e0e7-4e2b-baff-102c4f3dab68" containerID="892af5277c94b3e9c63fa7d163d5a810c7822333a926029d5a719078ef895b52" exitCode=0 Mar 16 18:36:05 crc kubenswrapper[4736]: I0316 18:36:05.253435 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561436-7qf94" event={"ID":"9884ed3c-e0e7-4e2b-baff-102c4f3dab68","Type":"ContainerDied","Data":"892af5277c94b3e9c63fa7d163d5a810c7822333a926029d5a719078ef895b52"} Mar 16 18:36:06 crc kubenswrapper[4736]: I0316 18:36:06.616576 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:06 crc kubenswrapper[4736]: I0316 18:36:06.713367 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cwns\" (UniqueName: \"kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns\") pod \"9884ed3c-e0e7-4e2b-baff-102c4f3dab68\" (UID: \"9884ed3c-e0e7-4e2b-baff-102c4f3dab68\") " Mar 16 18:36:06 crc kubenswrapper[4736]: I0316 18:36:06.719288 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns" (OuterVolumeSpecName: "kube-api-access-4cwns") pod "9884ed3c-e0e7-4e2b-baff-102c4f3dab68" (UID: "9884ed3c-e0e7-4e2b-baff-102c4f3dab68"). InnerVolumeSpecName "kube-api-access-4cwns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:36:06 crc kubenswrapper[4736]: I0316 18:36:06.815765 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cwns\" (UniqueName: \"kubernetes.io/projected/9884ed3c-e0e7-4e2b-baff-102c4f3dab68-kube-api-access-4cwns\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:07 crc kubenswrapper[4736]: I0316 18:36:07.269857 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561436-7qf94" event={"ID":"9884ed3c-e0e7-4e2b-baff-102c4f3dab68","Type":"ContainerDied","Data":"129dbc09639f148e2b67991fd873a7c7544a99c7f5c6fc0014fa05b0c731c918"} Mar 16 18:36:07 crc kubenswrapper[4736]: I0316 18:36:07.269919 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129dbc09639f148e2b67991fd873a7c7544a99c7f5c6fc0014fa05b0c731c918" Mar 16 18:36:07 crc kubenswrapper[4736]: I0316 18:36:07.270003 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561436-7qf94" Mar 16 18:36:07 crc kubenswrapper[4736]: I0316 18:36:07.340095 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561430-x9c48"] Mar 16 18:36:07 crc kubenswrapper[4736]: I0316 18:36:07.351855 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561430-x9c48"] Mar 16 18:36:08 crc kubenswrapper[4736]: I0316 18:36:08.989511 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f189480-c87b-4b74-a487-3d172c7de20a" path="/var/lib/kubelet/pods/2f189480-c87b-4b74-a487-3d172c7de20a/volumes" Mar 16 18:36:10 crc kubenswrapper[4736]: I0316 18:36:10.509975 4736 scope.go:117] "RemoveContainer" containerID="8b8c9ca5b34a573b31cdcdc9ce6d95a50f86091303f1cf9a5d6af9453950d5c1" Mar 16 18:36:15 crc kubenswrapper[4736]: I0316 18:36:15.176774 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2zcp" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" probeResult="failure" output=< Mar 16 18:36:15 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:36:15 crc kubenswrapper[4736]: > Mar 16 18:36:16 crc kubenswrapper[4736]: I0316 18:36:16.979494 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:36:16 crc kubenswrapper[4736]: E0316 18:36:16.980095 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:36:25 crc kubenswrapper[4736]: I0316 18:36:25.186811 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2zcp" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" probeResult="failure" output=< Mar 16 18:36:25 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:36:25 crc kubenswrapper[4736]: > Mar 16 18:36:27 crc kubenswrapper[4736]: I0316 18:36:27.981041 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:36:27 crc kubenswrapper[4736]: E0316 18:36:27.981798 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:36:32 crc kubenswrapper[4736]: I0316 18:36:32.493230 4736 generic.go:334] "Generic (PLEG): container finished" podID="87b6c5ae-069e-4bcc-96b8-74a495512e51" containerID="f019c9549d9b69a88f4d5a16c33301b706f65bf302e2ecf825a16a11a135612e" exitCode=0 Mar 16 18:36:32 crc kubenswrapper[4736]: I0316 18:36:32.493823 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" 
event={"ID":"87b6c5ae-069e-4bcc-96b8-74a495512e51","Type":"ContainerDied","Data":"f019c9549d9b69a88f4d5a16c33301b706f65bf302e2ecf825a16a11a135612e"} Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.628549 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.669166 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-vr96t"] Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.681642 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-vr96t"] Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.812985 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mkzs\" (UniqueName: \"kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs\") pod \"87b6c5ae-069e-4bcc-96b8-74a495512e51\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.813129 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host\") pod \"87b6c5ae-069e-4bcc-96b8-74a495512e51\" (UID: \"87b6c5ae-069e-4bcc-96b8-74a495512e51\") " Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.813356 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host" (OuterVolumeSpecName: "host") pod "87b6c5ae-069e-4bcc-96b8-74a495512e51" (UID: "87b6c5ae-069e-4bcc-96b8-74a495512e51"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.813664 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87b6c5ae-069e-4bcc-96b8-74a495512e51-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.830454 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs" (OuterVolumeSpecName: "kube-api-access-5mkzs") pod "87b6c5ae-069e-4bcc-96b8-74a495512e51" (UID: "87b6c5ae-069e-4bcc-96b8-74a495512e51"). InnerVolumeSpecName "kube-api-access-5mkzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:36:33 crc kubenswrapper[4736]: I0316 18:36:33.915341 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mkzs\" (UniqueName: \"kubernetes.io/projected/87b6c5ae-069e-4bcc-96b8-74a495512e51-kube-api-access-5mkzs\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.190176 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.243997 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.547058 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-vr96t" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.547162 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb28aaec48e4a904de6948ce96944ae14ed41fa5c02d5a0c5a3fe4583dafe63" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.848248 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-wpd6c"] Mar 16 18:36:34 crc kubenswrapper[4736]: E0316 18:36:34.848602 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9884ed3c-e0e7-4e2b-baff-102c4f3dab68" containerName="oc" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.848615 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="9884ed3c-e0e7-4e2b-baff-102c4f3dab68" containerName="oc" Mar 16 18:36:34 crc kubenswrapper[4736]: E0316 18:36:34.848639 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b6c5ae-069e-4bcc-96b8-74a495512e51" containerName="container-00" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.848646 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b6c5ae-069e-4bcc-96b8-74a495512e51" containerName="container-00" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.848874 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="9884ed3c-e0e7-4e2b-baff-102c4f3dab68" containerName="oc" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.848894 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b6c5ae-069e-4bcc-96b8-74a495512e51" containerName="container-00" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.850007 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.940942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.941316 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p65h4\" (UniqueName: \"kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:34 crc kubenswrapper[4736]: I0316 18:36:34.990685 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b6c5ae-069e-4bcc-96b8-74a495512e51" path="/var/lib/kubelet/pods/87b6c5ae-069e-4bcc-96b8-74a495512e51/volumes" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.017467 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.043419 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.043542 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.043597 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p65h4\" (UniqueName: \"kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.066182 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p65h4\" (UniqueName: \"kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4\") pod \"crc-debug-wpd6c\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.165943 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.559670 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" event={"ID":"0624b1a3-d82e-4301-baaf-b1765f14bc2f","Type":"ContainerStarted","Data":"0dbaa8d80dc3c758ba6e2249401fadfab6a91d88bfa43836aee9296e3a5fc002"} Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.560100 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" event={"ID":"0624b1a3-d82e-4301-baaf-b1765f14bc2f","Type":"ContainerStarted","Data":"45bd39b684810bd682374fdfd2c308fe322ef093bdcf964d7804ab0eef1d83df"} Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.563387 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2zcp" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" containerID="cri-o://8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6" gracePeriod=2 Mar 16 18:36:35 crc kubenswrapper[4736]: I0316 18:36:35.588345 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" podStartSLOduration=1.588318385 podStartE2EDuration="1.588318385s" podCreationTimestamp="2026-03-16 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:36:35.575363971 +0000 UTC m=+12197.302754288" watchObservedRunningTime="2026-03-16 18:36:35.588318385 +0000 UTC m=+12197.315708712" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.025386 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.164270 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4vr4\" (UniqueName: \"kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4\") pod \"cf56d473-433a-4c6a-90c9-3b77de1c550e\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.164566 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content\") pod \"cf56d473-433a-4c6a-90c9-3b77de1c550e\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.164615 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities\") pod \"cf56d473-433a-4c6a-90c9-3b77de1c550e\" (UID: \"cf56d473-433a-4c6a-90c9-3b77de1c550e\") " Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.165609 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities" (OuterVolumeSpecName: "utilities") pod "cf56d473-433a-4c6a-90c9-3b77de1c550e" (UID: "cf56d473-433a-4c6a-90c9-3b77de1c550e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.170613 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4" (OuterVolumeSpecName: "kube-api-access-q4vr4") pod "cf56d473-433a-4c6a-90c9-3b77de1c550e" (UID: "cf56d473-433a-4c6a-90c9-3b77de1c550e"). InnerVolumeSpecName "kube-api-access-q4vr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.268185 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.268456 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4vr4\" (UniqueName: \"kubernetes.io/projected/cf56d473-433a-4c6a-90c9-3b77de1c550e-kube-api-access-q4vr4\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.335981 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf56d473-433a-4c6a-90c9-3b77de1c550e" (UID: "cf56d473-433a-4c6a-90c9-3b77de1c550e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.370697 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf56d473-433a-4c6a-90c9-3b77de1c550e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.570564 4736 generic.go:334] "Generic (PLEG): container finished" podID="0624b1a3-d82e-4301-baaf-b1765f14bc2f" containerID="0dbaa8d80dc3c758ba6e2249401fadfab6a91d88bfa43836aee9296e3a5fc002" exitCode=0 Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.570623 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" event={"ID":"0624b1a3-d82e-4301-baaf-b1765f14bc2f","Type":"ContainerDied","Data":"0dbaa8d80dc3c758ba6e2249401fadfab6a91d88bfa43836aee9296e3a5fc002"} Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.577683 4736 generic.go:334] "Generic (PLEG): container finished" podID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerID="8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6" exitCode=0 Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.577747 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerDied","Data":"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6"} Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.577785 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2zcp" event={"ID":"cf56d473-433a-4c6a-90c9-3b77de1c550e","Type":"ContainerDied","Data":"bac71afb787e640f1f9b4db56a3f1f78d5a2556d430d8437f66e1e96763a49bf"} Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.577807 4736 scope.go:117] "RemoveContainer" containerID="8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.577893 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2zcp" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.608268 4736 scope.go:117] "RemoveContainer" containerID="b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.629196 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.635244 4736 scope.go:117] "RemoveContainer" containerID="7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.637777 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2zcp"] Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.697571 4736 scope.go:117] "RemoveContainer" containerID="8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6" Mar 16 18:36:36 crc kubenswrapper[4736]: E0316 18:36:36.705528 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6\": container with ID starting with 8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6 not found: ID does not exist" containerID="8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.705575 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6"} err="failed to get container status \"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6\": rpc error: code = NotFound desc = could not find container \"8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6\": container with ID starting with 8e44867b61de36527d9fe8d55e80a65df38b63b5c68898d49d1927bee4ebbcb6 not found: ID does not exist" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.705600 4736 scope.go:117] "RemoveContainer" containerID="b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2" Mar 16 18:36:36 crc kubenswrapper[4736]: E0316 18:36:36.710284 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2\": container with ID starting with b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2 not found: ID does not exist" containerID="b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.710311 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2"} err="failed to get container status \"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2\": rpc error: code = NotFound desc = could not find container \"b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2\": container with ID starting with b5c906719840a3aaf7a09b8fddc7f09a7f32b08dcd05cbb185620138203d25b2 not found: ID does not exist" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.710329 4736 scope.go:117] "RemoveContainer" containerID="7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c" Mar 16 18:36:36 crc kubenswrapper[4736]: E0316 18:36:36.710961 4736 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c\": container with ID starting with 7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c not found: ID does not exist" containerID="7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.711001 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c"} err="failed to get container status \"7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c\": rpc error: code = NotFound desc = could not find container \"7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c\": container with ID starting with 7496eb23486bf030f5edb406e443d8ff9f140092b563ab469a947b0186d98e5c not found: ID does not exist" Mar 16 18:36:36 crc kubenswrapper[4736]: I0316 18:36:36.988993 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" path="/var/lib/kubelet/pods/cf56d473-433a-4c6a-90c9-3b77de1c550e/volumes" Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.690431 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.795879 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host\") pod \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.795987 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host" (OuterVolumeSpecName: "host") pod "0624b1a3-d82e-4301-baaf-b1765f14bc2f" (UID: "0624b1a3-d82e-4301-baaf-b1765f14bc2f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.796139 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p65h4\" (UniqueName: \"kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4\") pod \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\" (UID: \"0624b1a3-d82e-4301-baaf-b1765f14bc2f\") " Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.796695 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0624b1a3-d82e-4301-baaf-b1765f14bc2f-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.813295 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4" (OuterVolumeSpecName: "kube-api-access-p65h4") pod "0624b1a3-d82e-4301-baaf-b1765f14bc2f" (UID: "0624b1a3-d82e-4301-baaf-b1765f14bc2f"). InnerVolumeSpecName "kube-api-access-p65h4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.860418 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-wpd6c"] Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.871339 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-wpd6c"] Mar 16 18:36:37 crc kubenswrapper[4736]: I0316 18:36:37.898552 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p65h4\" (UniqueName: \"kubernetes.io/projected/0624b1a3-d82e-4301-baaf-b1765f14bc2f-kube-api-access-p65h4\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:38 crc kubenswrapper[4736]: I0316 18:36:38.613879 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bd39b684810bd682374fdfd2c308fe322ef093bdcf964d7804ab0eef1d83df" Mar 16 18:36:38 crc kubenswrapper[4736]: I0316 18:36:38.613978 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-wpd6c" Mar 16 18:36:38 crc kubenswrapper[4736]: I0316 18:36:38.991345 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0624b1a3-d82e-4301-baaf-b1765f14bc2f" path="/var/lib/kubelet/pods/0624b1a3-d82e-4301-baaf-b1765f14bc2f/volumes" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.031840 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-m5x75"] Mar 16 18:36:39 crc kubenswrapper[4736]: E0316 18:36:39.033404 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0624b1a3-d82e-4301-baaf-b1765f14bc2f" containerName="container-00" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.033545 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="0624b1a3-d82e-4301-baaf-b1765f14bc2f" containerName="container-00" Mar 16 18:36:39 crc kubenswrapper[4736]: E0316 18:36:39.033639 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="extract-utilities" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.033719 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="extract-utilities" Mar 16 18:36:39 crc kubenswrapper[4736]: E0316 18:36:39.033836 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="extract-content" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.033954 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="extract-content" Mar 16 18:36:39 crc kubenswrapper[4736]: E0316 18:36:39.034075 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.034224 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.034850 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf56d473-433a-4c6a-90c9-3b77de1c550e" containerName="registry-server" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.035000 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="0624b1a3-d82e-4301-baaf-b1765f14bc2f" containerName="container-00" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 
18:36:39.036280 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.123526 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.123675 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqht\" (UniqueName: \"kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.225677 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.225796 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfqht\" (UniqueName: \"kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.225843 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.246961 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfqht\" (UniqueName: \"kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht\") pod \"crc-debug-m5x75\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.353730 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:39 crc kubenswrapper[4736]: W0316 18:36:39.383811 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod100c2b1e_6c7a_4a2f_98be_14c8d52b5474.slice/crio-7080756061a732559a0c17f2e8ad7ea6a9b3586753d16cad8dbde0fd938dc4fb WatchSource:0}: Error finding container 7080756061a732559a0c17f2e8ad7ea6a9b3586753d16cad8dbde0fd938dc4fb: Status 404 returned error can't find the container with id 7080756061a732559a0c17f2e8ad7ea6a9b3586753d16cad8dbde0fd938dc4fb Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.623124 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" event={"ID":"100c2b1e-6c7a-4a2f-98be-14c8d52b5474","Type":"ContainerStarted","Data":"7080756061a732559a0c17f2e8ad7ea6a9b3586753d16cad8dbde0fd938dc4fb"} Mar 16 18:36:39 crc kubenswrapper[4736]: I0316 18:36:39.977890 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:36:39 crc kubenswrapper[4736]: E0316 18:36:39.978567 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:36:40 crc kubenswrapper[4736]: I0316 18:36:40.638849 4736 generic.go:334] "Generic (PLEG): container finished" podID="100c2b1e-6c7a-4a2f-98be-14c8d52b5474" containerID="63a6682ec53aeef12cca0965fd786c343120ca65c15abccf7bf141a2a12f8d26" exitCode=0 Mar 16 18:36:40 crc kubenswrapper[4736]: I0316 18:36:40.638908 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" event={"ID":"100c2b1e-6c7a-4a2f-98be-14c8d52b5474","Type":"ContainerDied","Data":"63a6682ec53aeef12cca0965fd786c343120ca65c15abccf7bf141a2a12f8d26"} Mar 16 18:36:40 crc kubenswrapper[4736]: I0316 18:36:40.701564 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-m5x75"] Mar 16 18:36:40 crc kubenswrapper[4736]: I0316 18:36:40.714039 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vw8m2/crc-debug-m5x75"] Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.759394 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.877700 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host\") pod \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.877810 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host" (OuterVolumeSpecName: "host") pod "100c2b1e-6c7a-4a2f-98be-14c8d52b5474" (UID: "100c2b1e-6c7a-4a2f-98be-14c8d52b5474"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.877877 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfqht\" (UniqueName: \"kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht\") pod \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\" (UID: \"100c2b1e-6c7a-4a2f-98be-14c8d52b5474\") " Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.878280 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.889765 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht" (OuterVolumeSpecName: "kube-api-access-hfqht") pod "100c2b1e-6c7a-4a2f-98be-14c8d52b5474" (UID: "100c2b1e-6c7a-4a2f-98be-14c8d52b5474"). InnerVolumeSpecName "kube-api-access-hfqht". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:36:41 crc kubenswrapper[4736]: I0316 18:36:41.979713 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfqht\" (UniqueName: \"kubernetes.io/projected/100c2b1e-6c7a-4a2f-98be-14c8d52b5474-kube-api-access-hfqht\") on node \"crc\" DevicePath \"\"" Mar 16 18:36:42 crc kubenswrapper[4736]: I0316 18:36:42.657332 4736 scope.go:117] "RemoveContainer" containerID="63a6682ec53aeef12cca0965fd786c343120ca65c15abccf7bf141a2a12f8d26" Mar 16 18:36:42 crc kubenswrapper[4736]: I0316 18:36:42.657381 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/crc-debug-m5x75" Mar 16 18:36:42 crc kubenswrapper[4736]: I0316 18:36:42.988909 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100c2b1e-6c7a-4a2f-98be-14c8d52b5474" path="/var/lib/kubelet/pods/100c2b1e-6c7a-4a2f-98be-14c8d52b5474/volumes" Mar 16 18:36:51 crc kubenswrapper[4736]: I0316 18:36:51.977558 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:36:51 crc kubenswrapper[4736]: E0316 18:36:51.978251 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.208702 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59554d7c7d-jq5k7_3c9e0de3-6386-4733-bc2b-b2eec48d8098/barbican-api/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.277693 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59554d7c7d-jq5k7_3c9e0de3-6386-4733-bc2b-b2eec48d8098/barbican-api-log/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.425506 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5868c68fb4-ww9v7_ad07fdc6-06e5-4045-8049-783bc6e6d5c6/barbican-keystone-listener/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.612307 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-648fb9b5bc-8f55h_ee82b20e-e4ad-4267-9845-3c5838fa1e0f/barbican-worker/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.627130 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5868c68fb4-ww9v7_ad07fdc6-06e5-4045-8049-783bc6e6d5c6/barbican-keystone-listener-log/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.692900 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-648fb9b5bc-8f55h_ee82b20e-e4ad-4267-9845-3c5838fa1e0f/barbican-worker-log/0.log" Mar 16 18:36:59 crc kubenswrapper[4736]: I0316 18:36:59.887302 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm_bac31a5a-12b7-4a43-b596-91352137545b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.012549 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-central-agent/1.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.182292 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-central-agent/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.197944 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-notification-agent/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.233995 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/sg-core/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.248373 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/proxy-httpd/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.484788 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9/cinder-api-log/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.542054 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9/cinder-api/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.705086 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d62491e-6f65-49ab-8baf-3c653e7df95e/cinder-scheduler/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.842876 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d62491e-6f65-49ab-8baf-3c653e7df95e/probe/0.log" Mar 16 18:37:00 crc kubenswrapper[4736]: I0316 18:37:00.911484 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-shf2t_123f23c5-bce7-4080-a7da-1bce3b43d685/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.117530 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-thsth_09227c49-1a61-4c9c-827d-336efc0fe550/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.201102 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/init/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.409572 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/init/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.492093 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xxldx_ad968503-ce02-492c-a946-1d0e986a99ff/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.685144 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/dnsmasq-dns/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.776280 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5c913371-a3e8-4e40-a1e3-69f93eeef930/glance-httpd/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.826771 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5c913371-a3e8-4e40-a1e3-69f93eeef930/glance-log/0.log" Mar 16 18:37:01 crc kubenswrapper[4736]: I0316 18:37:01.995459 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce1ada3f-941d-4468-8a04-0c780a84148b/glance-httpd/0.log" Mar 16 18:37:02 crc kubenswrapper[4736]: I0316 18:37:02.053855 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce1ada3f-941d-4468-8a04-0c780a84148b/glance-log/0.log" Mar 16 18:37:02 crc kubenswrapper[4736]: I0316 18:37:02.584363 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-678d85b7f7-bdd5r_5498e21f-9b52-4ecb-9c10-7a688723d57f/heat-engine/0.log" Mar 16 18:37:02 crc kubenswrapper[4736]: I0316 18:37:02.909251 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon/2.log" Mar 16 18:37:02 crc kubenswrapper[4736]: I0316 18:37:02.979055 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:37:02 crc kubenswrapper[4736]: E0316 18:37:02.979442 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:37:03 crc kubenswrapper[4736]: I0316 18:37:03.097320 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon/1.log" Mar 16 18:37:03 crc kubenswrapper[4736]: I0316 18:37:03.695095 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9_7f216da9-755f-42e5-8058-15af7388a669/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:03 crc kubenswrapper[4736]: I0316 18:37:03.968299 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-f5xbq_3477848e-08a4-4e82-a565-d5e83bf58c7d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:04 crc kubenswrapper[4736]: I0316 18:37:04.039304 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-75b949cc99-d78kz_f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7/heat-api/0.log" Mar 16 18:37:04 crc kubenswrapper[4736]: I0316 18:37:04.335072 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7b4b9fc8dc-4c2rv_b85b52b6-77da-47f3-96b7-5230c7804524/heat-cfnapi/0.log" Mar 16 18:37:04 crc kubenswrapper[4736]: I0316 18:37:04.401814 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561281-4v2pq_1be53923-c22e-42a2-936a-5dd4a6484821/keystone-cron/0.log" Mar 16 18:37:04 crc kubenswrapper[4736]: I0316 18:37:04.592409 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561341-2v97d_e106e4db-3f81-4cba-9c0a-697acced07dd/keystone-cron/0.log" Mar 16 18:37:04 crc kubenswrapper[4736]: I0316 18:37:04.859156 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561401-q7h6f_0b25b5f7-f358-40e8-91df-9398c8719033/keystone-cron/0.log" Mar 16 18:37:05 crc kubenswrapper[4736]: I0316 18:37:05.025598 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_02f0ab2b-3871-4319-a39a-2c1d13a8c6e6/kube-state-metrics/0.log" Mar 16 18:37:05 crc kubenswrapper[4736]: I0316 18:37:05.301740 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h_97bb28be-aaed-4b82-9df1-cb24c9dd48e3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:05 crc kubenswrapper[4736]: I0316 18:37:05.516459 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon-log/0.log" Mar 16 18:37:05 crc kubenswrapper[4736]: I0316 18:37:05.873263 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75f7dd5797-28cnx_a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c/neutron-httpd/0.log" Mar 16 18:37:06 crc kubenswrapper[4736]: I0316 18:37:06.139966 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x_7e4df23c-a76d-497d-b0c1-0b3264ed20ce/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:06 crc kubenswrapper[4736]: I0316 18:37:06.215077 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-654fb4cdb6-6lld5_ea7a8a52-515b-45e3-8c30-4fd52d65cdc6/keystone-api/0.log" Mar 16 18:37:06 crc kubenswrapper[4736]: I0316 18:37:06.764513 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75f7dd5797-28cnx_a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c/neutron-api/0.log" Mar 16 18:37:07 crc kubenswrapper[4736]: I0316 18:37:07.233184 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_24e0aaa8-4eb1-408e-b98a-f99ce1f8e909/nova-cell0-conductor-conductor/0.log" Mar 16 18:37:07 crc kubenswrapper[4736]: I0316 18:37:07.374616 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_202b09c4-bf70-46c9-aff5-b536e3f7ef9d/nova-cell1-conductor-conductor/0.log" Mar 16 18:37:07 crc kubenswrapper[4736]: I0316 18:37:07.983055 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b1ca423d-d8ce-437c-9fca-1b57025ab173/nova-cell1-novncproxy-novncproxy/0.log" Mar 16 18:37:08 crc kubenswrapper[4736]: I0316 18:37:08.027695 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bbqh7_4020b89b-a736-4914-9ea6-969e75a9b526/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:08 crc kubenswrapper[4736]: I0316 18:37:08.416941 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b839907f-5ee5-450e-b483-ace8fd0fb0d5/nova-metadata-log/0.log" Mar 16 18:37:09 crc kubenswrapper[4736]: I0316 18:37:09.551740 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffe59cdd-6766-4d9b-a82c-0287d028a8d0/nova-api-log/0.log" Mar 16 18:37:09 crc kubenswrapper[4736]: I0316 18:37:09.659664 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c38eb8c1-13d7-4ef2-b026-d55b36f56919/nova-scheduler-scheduler/0.log" Mar 16 18:37:09 crc kubenswrapper[4736]: I0316 18:37:09.966996 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/mysql-bootstrap/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.214376 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/galera/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.230954 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/mysql-bootstrap/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.478004 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/mysql-bootstrap/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.681666 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b839907f-5ee5-450e-b483-ace8fd0fb0d5/nova-metadata-metadata/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.708124 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/mysql-bootstrap/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.733792 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/galera/0.log" Mar 16 18:37:10 crc kubenswrapper[4736]: I0316 18:37:10.977010 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_421bab10-ac4a-458f-98e3-18cd0adef038/openstackclient/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.200454 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9wkhh_b3d93764-b264-4e7d-87fe-ea95bd3fb252/ovn-controller/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.256735 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m74rl_8ff193b6-fc55-427d-b256-a9b253fa60c4/openstack-network-exporter/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.614827 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server-init/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.736783 4736 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_nova-api-0_ffe59cdd-6766-4d9b-a82c-0287d028a8d0/nova-api-api/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.822757 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server-init/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.833003 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovs-vswitchd/0.log" Mar 16 18:37:11 crc kubenswrapper[4736]: I0316 18:37:11.890221 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.096986 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xt72q_cd71853b-a1d3-4429-90b3-4cee241cfa21/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.152973 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_38835fa0-dde3-4eb4-8ec0-7627436b49ca/openstack-network-exporter/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.323338 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_38835fa0-dde3-4eb4-8ec0-7627436b49ca/ovn-northd/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.407135 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3228db46-56d3-4e82-8973-77a049c7e003/openstack-network-exporter/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.574637 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3228db46-56d3-4e82-8973-77a049c7e003/ovsdbserver-nb/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.637778 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_adfa9156-d077-4b45-af4d-cc113fbff209/ovsdbserver-sb/0.log" Mar 16 18:37:12 crc kubenswrapper[4736]: I0316 18:37:12.658432 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_adfa9156-d077-4b45-af4d-cc113fbff209/openstack-network-exporter/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.182298 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/setup-container/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.361279 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784f554468-tgz6j_f744bb37-172a-4e29-b348-5b70d53c5d16/placement-api/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.448429 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/rabbitmq/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.503311 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/setup-container/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.625301 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784f554468-tgz6j_f744bb37-172a-4e29-b348-5b70d53c5d16/placement-log/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.730814 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/setup-container/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.933868 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/setup-container/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.992131 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/rabbitmq/0.log" Mar 16 18:37:13 crc kubenswrapper[4736]: I0316 18:37:13.999229 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb_431e1d6e-e9a2-4414-b37e-9612991eb00c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:14 crc kubenswrapper[4736]: I0316 18:37:14.319935 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lzcb8_a44a03ad-9259-452c-8234-2ee8f93d66be/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:14 crc kubenswrapper[4736]: I0316 18:37:14.337083 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n_37a5c3b4-a904-4d80-8823-97fa52f36de3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:14 crc kubenswrapper[4736]: I0316 18:37:14.551207 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-trqck_9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:14 crc kubenswrapper[4736]: I0316 18:37:14.654226 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-clhnp_b21d0120-de7a-44aa-a9a7-469ff2670bd4/ssh-known-hosts-edpm-deployment/0.log" Mar 16 18:37:14 crc kubenswrapper[4736]: I0316 18:37:14.961346 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-678dd4f677-jxtsk_bccee937-d642-4483-87fb-033b157cf68c/proxy-server/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.060145 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6f7cb_48b165ae-e228-45fa-a5d3-6d1f8c8f43b1/swift-ring-rebalance/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.305033 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-auditor/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.326771 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-reaper/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.425023 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-678dd4f677-jxtsk_bccee937-d642-4483-87fb-033b157cf68c/proxy-httpd/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.575331 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-auditor/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.593526 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-replicator/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.633164 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-server/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.779566 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-server/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.798979 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-replicator/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.859885 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-updater/0.log" Mar 16 18:37:15 crc kubenswrapper[4736]: I0316 18:37:15.957523 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-auditor/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.010883 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-expirer/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.210411 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-replicator/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.217767 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/rsync/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.220641 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-server/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.247024 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-updater/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.562559 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/swift-recon-cron/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.712880 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl_bf51b7ea-25d5-4fa2-9abe-db781c31f96f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.867394 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31/tempest-tests-tempest-tests-runner/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.883525 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_252155f6-a310-43e1-bf80-1d17a2db2128/tempest-tests-tempest-tests-runner/0.log" Mar 16 18:37:16 crc kubenswrapper[4736]: I0316 18:37:16.980166 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:37:16 crc kubenswrapper[4736]: E0316 18:37:16.981036 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:37:17 crc kubenswrapper[4736]: I0316 18:37:17.098898 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19/test-operator-logs-container/0.log" Mar 16 18:37:17 crc kubenswrapper[4736]: I0316 18:37:17.224472 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj_6f274275-8257-4335-b3d8-a2441d5ddf1e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:37:27 crc kubenswrapper[4736]: I0316 18:37:27.979811 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:37:27 crc kubenswrapper[4736]: E0316 18:37:27.980676 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:37:32 crc kubenswrapper[4736]: I0316 18:37:32.588193 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7fa40817-425b-4ee8-9c3b-e7e109307837/memcached/0.log" Mar 16 18:37:39 crc kubenswrapper[4736]: I0316 18:37:39.978405 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:37:39 crc kubenswrapper[4736]: E0316 18:37:39.980078 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.208564 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-w4ppt_99a35a5a-103f-4e00-9b39-d4f86531f5f7/manager/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.318366 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.612527 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.624052 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.684416 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.896827 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.950726 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:37:51 crc kubenswrapper[4736]: I0316 18:37:51.993883 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/extract/0.log" Mar 16 18:37:52 crc kubenswrapper[4736]: I0316 18:37:52.343784 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-c6tc2_2d48b057-960e-445a-bc66-b6d3dbfb56f9/manager/0.log" Mar 16 18:37:52 crc kubenswrapper[4736]: I0316 18:37:52.524382 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-z9l9q_9d7909e1-3088-4a9e-b2ac-286927abd741/manager/0.log" Mar 16 18:37:52 crc kubenswrapper[4736]: I0316 18:37:52.722502 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-d6m9n_1ae22b3c-97a5-4592-b263-557131818155/manager/0.log" Mar 16 18:37:52 crc kubenswrapper[4736]: I0316 18:37:52.895763 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-fd7xj_aac26090-af84-496a-afdf-efdb24694811/manager/0.log" Mar 16 18:37:53 crc kubenswrapper[4736]: I0316 18:37:53.342620 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-9kzj2_f8308a1a-301e-40b9-8a0e-b7e267e74a10/manager/0.log" Mar 16 18:37:53 crc kubenswrapper[4736]: I0316 18:37:53.543640 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-9b78c_634ac783-1fe6-4191-b432-f22ad5d84357/manager/0.log" Mar 16 18:37:53 crc kubenswrapper[4736]: I0316 18:37:53.703003 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-7gd6n_d77bc7ac-fb08-4603-8453-677c6be6916d/manager/0.log" Mar 16 18:37:53 crc kubenswrapper[4736]: I0316 18:37:53.840232 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-sjpl5_569449b8-1135-4dd6-b6fe-ad66844b413e/manager/0.log" Mar 16 18:37:53 crc kubenswrapper[4736]: I0316 18:37:53.977732 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:37:53 crc kubenswrapper[4736]: E0316 18:37:53.978180 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:37:54 crc kubenswrapper[4736]: I0316 18:37:54.166761 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-bgvjq_285f243f-b886-440f-8a92-b1ddf60bf6e6/manager/0.log" Mar 16 18:37:54 crc kubenswrapper[4736]: I0316 18:37:54.798466 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-bqghw_99d86cbe-cf17-42a7-bc5b-d692609fff64/manager/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.012517 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-7hrfc_b1ae843c-f1b5-4ee2-8300-55f93941ba2b/manager/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.040551 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-tgqsm_8163ef92-862a-4de1-a443-8ac84a5ba0c9/manager/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.072740 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-47kkg_e7971b38-1b13-4984-a055-2cc52b34bf6b/manager/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.410173 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-pc5vv_62d536a1-c184-4077-a6f8-4285c3ebe5db/manager/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.475704 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5dbd94f64-hsp7x_34b67803-050a-457b-80ff-64455949a26d/operator/0.log" Mar 16 18:37:55 crc kubenswrapper[4736]: I0316 18:37:55.861754 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ztkrd_aeb1e197-872b-4ade-b3e4-425a5e52433f/registry-server/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.157269 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-vfqg8_40be2c61-bd71-46b6-b837-abf09d8d5aeb/manager/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.477793 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-6fjhm_6cbcdd30-245d-4732-8986-77f861f1f568/manager/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.728451 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5467877-vhgh7_0a9b1e66-192c-4eab-a960-7fbd08759f54/manager/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.785397 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dblvg_0a609c84-6f6b-48ae-a12b-d604e7b91c36/operator/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.810274 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-7lk9g_534a3ae8-6587-4e8a-b454-b084edbfeb21/manager/0.log" Mar 16 18:37:56 crc kubenswrapper[4736]: I0316 18:37:56.966657 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-6z9rj_fff6882e-3a77-462f-b12e-25192ea56328/manager/0.log" Mar 16 18:37:57 crc kubenswrapper[4736]: I0316 18:37:57.047328 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-5ngf9_93d0e3bc-0e33-4254-b52e-31f28fdff357/manager/0.log" Mar 16 18:37:57 crc kubenswrapper[4736]: I0316 18:37:57.110912 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-pj69z_bdcce941-5cae-42fe-9dc5-a71e1e55790e/manager/0.log" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.144028 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561438-jgrtd"] Mar 16 18:38:00 crc kubenswrapper[4736]: E0316 18:38:00.144916 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100c2b1e-6c7a-4a2f-98be-14c8d52b5474" containerName="container-00" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.144928 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="100c2b1e-6c7a-4a2f-98be-14c8d52b5474" containerName="container-00" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.145147 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="100c2b1e-6c7a-4a2f-98be-14c8d52b5474" containerName="container-00" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.145702 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.156441 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561438-jgrtd"] Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.159809 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.159814 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.160242 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.206993 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j7p\" (UniqueName: \"kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p\") pod \"auto-csr-approver-29561438-jgrtd\" (UID: \"4abb45b1-25f1-4f81-a0ea-d582bbad2168\") " pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.308803 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6j7p\" (UniqueName: \"kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p\") pod \"auto-csr-approver-29561438-jgrtd\" (UID: \"4abb45b1-25f1-4f81-a0ea-d582bbad2168\") " pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.334488 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6j7p\" (UniqueName: \"kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p\") pod \"auto-csr-approver-29561438-jgrtd\" (UID: \"4abb45b1-25f1-4f81-a0ea-d582bbad2168\") " 
pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:00 crc kubenswrapper[4736]: I0316 18:38:00.464964 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:01 crc kubenswrapper[4736]: I0316 18:38:01.384755 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561438-jgrtd"] Mar 16 18:38:01 crc kubenswrapper[4736]: I0316 18:38:01.400396 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:38:02 crc kubenswrapper[4736]: I0316 18:38:02.414332 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" event={"ID":"4abb45b1-25f1-4f81-a0ea-d582bbad2168","Type":"ContainerStarted","Data":"8cc57d41237b598eabaeb16d20b540fa00223fad137c2c6c87fc9f1e89866835"} Mar 16 18:38:03 crc kubenswrapper[4736]: I0316 18:38:03.422188 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" event={"ID":"4abb45b1-25f1-4f81-a0ea-d582bbad2168","Type":"ContainerStarted","Data":"5e7f94777afecdd775f24aece1426382789e426219575b2668181df7cbd16c08"} Mar 16 18:38:03 crc kubenswrapper[4736]: I0316 18:38:03.439409 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" podStartSLOduration=2.441670065 podStartE2EDuration="3.4393937s" podCreationTimestamp="2026-03-16 18:38:00 +0000 UTC" firstStartedPulling="2026-03-16 18:38:01.396310631 +0000 UTC m=+12283.123700918" lastFinishedPulling="2026-03-16 18:38:02.394034256 +0000 UTC m=+12284.121424553" observedRunningTime="2026-03-16 18:38:03.433030276 +0000 UTC m=+12285.160420563" watchObservedRunningTime="2026-03-16 18:38:03.4393937 +0000 UTC m=+12285.166783987" Mar 16 18:38:05 crc kubenswrapper[4736]: I0316 18:38:05.439203 4736 generic.go:334] "Generic (PLEG): container finished" podID="4abb45b1-25f1-4f81-a0ea-d582bbad2168" containerID="5e7f94777afecdd775f24aece1426382789e426219575b2668181df7cbd16c08" exitCode=0 Mar 16 18:38:05 crc kubenswrapper[4736]: I0316 18:38:05.439294 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" event={"ID":"4abb45b1-25f1-4f81-a0ea-d582bbad2168","Type":"ContainerDied","Data":"5e7f94777afecdd775f24aece1426382789e426219575b2668181df7cbd16c08"} Mar 16 18:38:06 crc kubenswrapper[4736]: I0316 18:38:06.838858 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:06 crc kubenswrapper[4736]: I0316 18:38:06.931335 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6j7p\" (UniqueName: \"kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p\") pod \"4abb45b1-25f1-4f81-a0ea-d582bbad2168\" (UID: \"4abb45b1-25f1-4f81-a0ea-d582bbad2168\") " Mar 16 18:38:06 crc kubenswrapper[4736]: I0316 18:38:06.938490 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p" (OuterVolumeSpecName: "kube-api-access-k6j7p") pod "4abb45b1-25f1-4f81-a0ea-d582bbad2168" (UID: "4abb45b1-25f1-4f81-a0ea-d582bbad2168"). InnerVolumeSpecName "kube-api-access-k6j7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.033982 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6j7p\" (UniqueName: \"kubernetes.io/projected/4abb45b1-25f1-4f81-a0ea-d582bbad2168-kube-api-access-k6j7p\") on node \"crc\" DevicePath \"\"" Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.455920 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" event={"ID":"4abb45b1-25f1-4f81-a0ea-d582bbad2168","Type":"ContainerDied","Data":"8cc57d41237b598eabaeb16d20b540fa00223fad137c2c6c87fc9f1e89866835"} Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.455969 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cc57d41237b598eabaeb16d20b540fa00223fad137c2c6c87fc9f1e89866835" Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.455981 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561438-jgrtd" Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.520433 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561432-8c7ht"] Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.527471 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561432-8c7ht"] Mar 16 18:38:07 crc kubenswrapper[4736]: I0316 18:38:07.978029 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:38:07 crc kubenswrapper[4736]: E0316 18:38:07.978307 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:38:08 crc kubenswrapper[4736]: I0316 18:38:08.990323 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96eaf803-3df4-4ac7-8938-7514150cadba" path="/var/lib/kubelet/pods/96eaf803-3df4-4ac7-8938-7514150cadba/volumes" Mar 16 18:38:10 crc kubenswrapper[4736]: I0316 18:38:10.716188 4736 scope.go:117] "RemoveContainer" containerID="d19259c4d350ac02466796098b9507b53ed9ac36c451b9063704857b404063e9" Mar 16 18:38:20 crc kubenswrapper[4736]: I0316 18:38:20.978413 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:38:21 crc kubenswrapper[4736]: I0316 18:38:21.596040 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311"} Mar 16 18:38:22 crc kubenswrapper[4736]: I0316 18:38:22.137367 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zxgkb_dca4fa92-819d-4973-87b1-b6282946f072/control-plane-machine-set-operator/0.log" Mar 16 18:38:22 crc kubenswrapper[4736]: I0316 18:38:22.304407 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dswrb_aee91c3b-8c99-4023-a891-2aaa3ab5ebcc/kube-rbac-proxy/0.log" Mar 16 18:38:22 crc kubenswrapper[4736]: I0316 18:38:22.340413 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dswrb_aee91c3b-8c99-4023-a891-2aaa3ab5ebcc/machine-api-operator/0.log" Mar 16 18:38:37 crc kubenswrapper[4736]: I0316 18:38:37.620333 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-775xw_855eb880-6d37-4d3c-a863-d4cb7520dc47/cert-manager-controller/0.log" Mar 16 18:38:37 crc kubenswrapper[4736]: I0316 18:38:37.756917 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nvl2s_8c13d851-26c9-4a4f-8ffc-a94a10784cf2/cert-manager-cainjector/0.log" Mar 16 18:38:37 crc kubenswrapper[4736]: I0316 18:38:37.822703 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qrt7l_d2eb8b3d-8b48-4110-bab7-66fc20948ee5/cert-manager-webhook/0.log" Mar 16 18:38:52 crc kubenswrapper[4736]: I0316 18:38:52.697175 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-tsmxn_3822e5b4-5129-4b3b-8bf3-7262a5ad4cde/nmstate-console-plugin/0.log" Mar 16 18:38:52 crc kubenswrapper[4736]: I0316 18:38:52.888649 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tbvd2_eea9e7aa-6f24-4b45-b7b4-347a38dccb64/nmstate-handler/0.log" Mar 16 18:38:53 crc kubenswrapper[4736]: I0316 18:38:53.000997 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nhkfx_b184bdf0-82cb-428b-96ce-f4ebbada7645/kube-rbac-proxy/0.log" Mar 16 18:38:53 crc kubenswrapper[4736]: I0316 18:38:53.024905 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nhkfx_b184bdf0-82cb-428b-96ce-f4ebbada7645/nmstate-metrics/0.log" Mar 16 18:38:53 crc kubenswrapper[4736]: I0316 18:38:53.243522 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-5cjp6_46173c99-f17a-4099-a210-397cf7b8cd18/nmstate-operator/0.log" Mar 16 18:38:53 crc kubenswrapper[4736]: I0316 18:38:53.320388 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-xbp5l_88c72b3e-a013-4f44-ae5f-93e44846f22a/nmstate-webhook/0.log" Mar 16 18:39:18 crc kubenswrapper[4736]: I0316 18:39:18.873598 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5b95679b96-mfd85" podUID="109b033e-a4ea-474a-9e79-e895cc75666e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.200162 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fxq9c_29db2924-7903-45c8-9f87-a4e3e070a4a3/controller/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.224570 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fxq9c_29db2924-7903-45c8-9f87-a4e3e070a4a3/kube-rbac-proxy/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.498007 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.690527 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.693066 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.731859 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.794828 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.943089 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:39:26 crc kubenswrapper[4736]: I0316 18:39:26.948393 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.038338 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.040915 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.201287 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.273204 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.290450 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.316708 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/controller/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.570146 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr-metrics/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.613589 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/kube-rbac-proxy/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.783678 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/kube-rbac-proxy-frr/0.log" Mar 16 18:39:27 crc kubenswrapper[4736]: I0316 18:39:27.910963 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/reloader/0.log" Mar 16 18:39:28 crc kubenswrapper[4736]: I0316 
18:39:28.179566 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-vxgc7_21bc5f54-2767-431f-add2-433724ea4408/frr-k8s-webhook-server/0.log" Mar 16 18:39:28 crc kubenswrapper[4736]: I0316 18:39:28.434565 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b95679b96-mfd85_109b033e-a4ea-474a-9e79-e895cc75666e/manager/0.log" Mar 16 18:39:28 crc kubenswrapper[4736]: I0316 18:39:28.723352 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-9c55cfcd7-trkfb_b7e58b81-1f06-4844-adbe-ade114adc726/webhook-server/0.log" Mar 16 18:39:29 crc kubenswrapper[4736]: I0316 18:39:29.004816 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djs8w_5ab46a17-c761-4952-b743-9ede5877674a/kube-rbac-proxy/0.log" Mar 16 18:39:29 crc kubenswrapper[4736]: I0316 18:39:29.847901 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr/1.log" Mar 16 18:39:29 crc kubenswrapper[4736]: I0316 18:39:29.888871 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djs8w_5ab46a17-c761-4952-b743-9ede5877674a/speaker/0.log" Mar 16 18:39:29 crc kubenswrapper[4736]: I0316 18:39:29.961305 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr/0.log" Mar 16 18:39:45 crc kubenswrapper[4736]: I0316 18:39:45.498647 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:39:45 crc kubenswrapper[4736]: I0316 18:39:45.716941 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:39:45 crc kubenswrapper[4736]: I0316 18:39:45.736769 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:39:45 crc kubenswrapper[4736]: I0316 18:39:45.795126 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:39:45 crc kubenswrapper[4736]: I0316 18:39:45.945768 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.005454 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/extract/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.059027 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.158748 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.326175 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.358533 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.406254 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.581308 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.623766 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/extract/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.645993 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:39:46 crc kubenswrapper[4736]: I0316 18:39:46.764643 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.047644 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.054890 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.068921 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.248593 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.289379 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.525723 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-utilities/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.810383 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-content/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.858661 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-utilities/0.log" Mar 16 18:39:47 crc kubenswrapper[4736]: I0316 18:39:47.976786 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-content/0.log" Mar 16 18:39:48 crc kubenswrapper[4736]: I0316 18:39:48.214298 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-content/0.log" Mar 16 18:39:48 crc kubenswrapper[4736]: I0316 18:39:48.236745 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/extract-utilities/0.log" Mar 16 18:39:48 crc kubenswrapper[4736]: I0316 18:39:48.485199 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qcc8_7c040e8d-b247-49a6-93bd-f928c704b135/marketplace-operator/0.log" Mar 16 18:39:48 crc kubenswrapper[4736]: I0316 18:39:48.796557 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xvwm9_e43252e2-f02a-4803-b945-9ce6e746e104/registry-server/0.log" Mar 16 18:39:48 crc kubenswrapper[4736]: I0316 18:39:48.836176 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.063346 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/registry-server/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.138444 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.158617 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.188601 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.385965 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.460414 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.747047 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/registry-server/0.log" Mar 16 18:39:49 crc kubenswrapper[4736]: I0316 18:39:49.907905 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.034509 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.057641 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.059354 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.305066 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.306719 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:39:50 crc kubenswrapper[4736]: I0316 18:39:50.829509 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/registry-server/0.log" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.358047 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561440-78r7d"] Mar 16 18:40:00 crc kubenswrapper[4736]: E0316 18:40:00.365195 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4abb45b1-25f1-4f81-a0ea-d582bbad2168" containerName="oc" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.365235 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4abb45b1-25f1-4f81-a0ea-d582bbad2168" containerName="oc" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.368082 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4abb45b1-25f1-4f81-a0ea-d582bbad2168" containerName="oc" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.375404 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.389146 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.389150 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.389156 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.443588 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntntj\" (UniqueName: \"kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj\") pod \"auto-csr-approver-29561440-78r7d\" (UID: \"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156\") " pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.470932 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561440-78r7d"] Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.558313 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntntj\" (UniqueName: \"kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj\") pod \"auto-csr-approver-29561440-78r7d\" (UID: \"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156\") " pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.594407 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntntj\" (UniqueName: \"kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj\") pod \"auto-csr-approver-29561440-78r7d\" (UID: \"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156\") " pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:00 crc kubenswrapper[4736]: I0316 18:40:00.715003 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:02 crc kubenswrapper[4736]: I0316 18:40:02.050325 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561440-78r7d"] Mar 16 18:40:02 crc kubenswrapper[4736]: W0316 18:40:02.075062 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11bd4e5b_b7a7_43b6_b5e4_d29b0ecea156.slice/crio-aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f WatchSource:0}: Error finding container aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f: Status 404 returned error can't find the container with id aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f Mar 16 18:40:02 crc kubenswrapper[4736]: I0316 18:40:02.620224 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561440-78r7d" event={"ID":"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156","Type":"ContainerStarted","Data":"aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f"} Mar 16 18:40:05 crc kubenswrapper[4736]: I0316 18:40:05.646667 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561440-78r7d" event={"ID":"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156","Type":"ContainerStarted","Data":"1520920d9bfe76a5cbf0008494ecd01ce0f017bf9ef2066c1bb70fa06ab86d01"} Mar 16 18:40:05 crc kubenswrapper[4736]: I0316 18:40:05.662647 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561440-78r7d" podStartSLOduration=3.762021775 podStartE2EDuration="5.661868046s" podCreationTimestamp="2026-03-16 18:40:00 +0000 UTC" firstStartedPulling="2026-03-16 18:40:02.093464992 +0000 UTC m=+12403.820855279" lastFinishedPulling="2026-03-16 18:40:03.993311263 +0000 UTC m=+12405.720701550" observedRunningTime="2026-03-16 18:40:05.656471499 +0000 UTC m=+12407.383861786" watchObservedRunningTime="2026-03-16 18:40:05.661868046 +0000 UTC m=+12407.389258333" Mar 16 18:40:06 crc kubenswrapper[4736]: I0316 18:40:06.659654 4736 generic.go:334] "Generic (PLEG): container finished" podID="11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" containerID="1520920d9bfe76a5cbf0008494ecd01ce0f017bf9ef2066c1bb70fa06ab86d01" exitCode=0 Mar 16 18:40:06 crc kubenswrapper[4736]: I0316 18:40:06.660346 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561440-78r7d" event={"ID":"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156","Type":"ContainerDied","Data":"1520920d9bfe76a5cbf0008494ecd01ce0f017bf9ef2066c1bb70fa06ab86d01"} Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.124712 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.214720 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntntj\" (UniqueName: \"kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj\") pod \"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156\" (UID: \"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156\") " Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.225302 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj" (OuterVolumeSpecName: "kube-api-access-ntntj") pod "11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" (UID: "11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156"). InnerVolumeSpecName "kube-api-access-ntntj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.317635 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntntj\" (UniqueName: \"kubernetes.io/projected/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156-kube-api-access-ntntj\") on node \"crc\" DevicePath \"\"" Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.678926 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561440-78r7d" event={"ID":"11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156","Type":"ContainerDied","Data":"aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f"} Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.678973 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1ca7f724fb65d5cd701bb4967523b92521978d59d28083e78dbc3fa6fcee4f" Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.678987 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561440-78r7d" Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.789254 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561434-nbs79"] Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.799296 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561434-nbs79"] Mar 16 18:40:08 crc kubenswrapper[4736]: I0316 18:40:08.993832 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="940e4f5b-4c4e-483d-8173-da87b8c05e91" path="/var/lib/kubelet/pods/940e4f5b-4c4e-483d-8173-da87b8c05e91/volumes" Mar 16 18:40:10 crc kubenswrapper[4736]: I0316 18:40:10.855467 4736 scope.go:117] "RemoveContainer" containerID="9784ac8777e9f5ec722bc0b3df38151aab4b9f3850a81864233a8f83ea89f8cc" Mar 16 18:40:18 crc kubenswrapper[4736]: I0316 18:40:18.973673 4736 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-kffc4" podUID="8fdffd9a-9fd4-4ec8-a660-cbd4c759b375" containerName="registry-server" probeResult="failure" output=< Mar 16 18:40:18 crc kubenswrapper[4736]: timeout: health rpc did not complete within 1s Mar 16 18:40:18 crc kubenswrapper[4736]: > Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.241581 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m4pdk"] Mar 16 18:40:21 crc kubenswrapper[4736]: E0316 18:40:21.242551 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" containerName="oc" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.242563 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" containerName="oc" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.242755 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" containerName="oc" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.251308 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.408570 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnml\" (UniqueName: \"kubernetes.io/projected/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-kube-api-access-9nnml\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.408922 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-catalog-content\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.408977 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-utilities\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.412523 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4pdk"] Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.510945 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-catalog-content\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.511022 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-utilities\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.511179 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnml\" (UniqueName: \"kubernetes.io/projected/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-kube-api-access-9nnml\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.513980 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-catalog-content\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.514325 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-utilities\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.565079 4736 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9nnml\" (UniqueName: \"kubernetes.io/projected/0291b8d7-ac75-48c3-9080-62e1bc49bb9f-kube-api-access-9nnml\") pod \"community-operators-m4pdk\" (UID: \"0291b8d7-ac75-48c3-9080-62e1bc49bb9f\") " pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:21 crc kubenswrapper[4736]: I0316 18:40:21.575033 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:23 crc kubenswrapper[4736]: I0316 18:40:23.085922 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4pdk"] Mar 16 18:40:23 crc kubenswrapper[4736]: I0316 18:40:23.826713 4736 generic.go:334] "Generic (PLEG): container finished" podID="0291b8d7-ac75-48c3-9080-62e1bc49bb9f" containerID="f8289d1460c9fdb9979b2c5d13d664c32b897a273cbb76a0787b0db869101eb6" exitCode=0 Mar 16 18:40:23 crc kubenswrapper[4736]: I0316 18:40:23.827144 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4pdk" event={"ID":"0291b8d7-ac75-48c3-9080-62e1bc49bb9f","Type":"ContainerDied","Data":"f8289d1460c9fdb9979b2c5d13d664c32b897a273cbb76a0787b0db869101eb6"} Mar 16 18:40:23 crc kubenswrapper[4736]: I0316 18:40:23.827550 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4pdk" event={"ID":"0291b8d7-ac75-48c3-9080-62e1bc49bb9f","Type":"ContainerStarted","Data":"4599c640a04d3588171e1cb1b26c10e55e0b09f7286175d0b9eb0002d54c821c"} Mar 16 18:40:32 crc kubenswrapper[4736]: I0316 18:40:32.933206 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4pdk" event={"ID":"0291b8d7-ac75-48c3-9080-62e1bc49bb9f","Type":"ContainerStarted","Data":"b34e779fc583cdfc75c54276d19e05ce7edd2734761733159b50313d4baa9af8"} Mar 16 18:40:34 crc kubenswrapper[4736]: I0316 18:40:34.951969 4736 generic.go:334] "Generic (PLEG): container finished" podID="0291b8d7-ac75-48c3-9080-62e1bc49bb9f" containerID="b34e779fc583cdfc75c54276d19e05ce7edd2734761733159b50313d4baa9af8" exitCode=0 Mar 16 18:40:34 crc kubenswrapper[4736]: I0316 18:40:34.952047 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4pdk" event={"ID":"0291b8d7-ac75-48c3-9080-62e1bc49bb9f","Type":"ContainerDied","Data":"b34e779fc583cdfc75c54276d19e05ce7edd2734761733159b50313d4baa9af8"} Mar 16 18:40:35 crc kubenswrapper[4736]: I0316 18:40:35.969676 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m4pdk" event={"ID":"0291b8d7-ac75-48c3-9080-62e1bc49bb9f","Type":"ContainerStarted","Data":"cb7e467f5c2b4b831af0779b3bb9d6bbdba50b11cc0e3573c731deaa3991c31e"} Mar 16 18:40:36 crc kubenswrapper[4736]: I0316 18:40:36.002307 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m4pdk" podStartSLOduration=3.4620556000000002 podStartE2EDuration="14.999069131s" podCreationTimestamp="2026-03-16 18:40:21 +0000 UTC" firstStartedPulling="2026-03-16 18:40:23.829855146 +0000 UTC m=+12425.557245433" lastFinishedPulling="2026-03-16 18:40:35.366868687 +0000 UTC m=+12437.094258964" observedRunningTime="2026-03-16 18:40:35.992949724 +0000 UTC m=+12437.720340001" watchObservedRunningTime="2026-03-16 18:40:35.999069131 +0000 UTC m=+12437.726459418" Mar 16 18:40:38 crc kubenswrapper[4736]: I0316 18:40:38.509747 4736 patch_prober.go:28] interesting 
pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:40:38 crc kubenswrapper[4736]: I0316 18:40:38.512414 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:40:41 crc kubenswrapper[4736]: I0316 18:40:41.576381 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:41 crc kubenswrapper[4736]: I0316 18:40:41.577019 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:42 crc kubenswrapper[4736]: I0316 18:40:42.628344 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m4pdk" podUID="0291b8d7-ac75-48c3-9080-62e1bc49bb9f" containerName="registry-server" probeResult="failure" output=< Mar 16 18:40:42 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:40:42 crc kubenswrapper[4736]: > Mar 16 18:40:51 crc kubenswrapper[4736]: I0316 18:40:51.720952 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:51 crc kubenswrapper[4736]: I0316 18:40:51.776302 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m4pdk" Mar 16 18:40:52 crc kubenswrapper[4736]: I0316 18:40:52.287764 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m4pdk"] Mar 16 18:40:52 crc kubenswrapper[4736]: I0316 18:40:52.456801 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 18:40:52 crc kubenswrapper[4736]: I0316 18:40:52.463592 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xvwm9" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="registry-server" containerID="cri-o://2f832717c8f9f1d97e80def2b9c87c0f86d36258536fe174866cee570bf740cd" gracePeriod=2 Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.128442 4736 generic.go:334] "Generic (PLEG): container finished" podID="e43252e2-f02a-4803-b945-9ce6e746e104" containerID="2f832717c8f9f1d97e80def2b9c87c0f86d36258536fe174866cee570bf740cd" exitCode=0 Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.128525 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerDied","Data":"2f832717c8f9f1d97e80def2b9c87c0f86d36258536fe174866cee570bf740cd"} Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.740873 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.847940 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjtr\" (UniqueName: \"kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr\") pod \"e43252e2-f02a-4803-b945-9ce6e746e104\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.848227 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content\") pod \"e43252e2-f02a-4803-b945-9ce6e746e104\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.848285 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities\") pod \"e43252e2-f02a-4803-b945-9ce6e746e104\" (UID: \"e43252e2-f02a-4803-b945-9ce6e746e104\") " Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.852700 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities" (OuterVolumeSpecName: "utilities") pod "e43252e2-f02a-4803-b945-9ce6e746e104" (UID: "e43252e2-f02a-4803-b945-9ce6e746e104"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.886090 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr" (OuterVolumeSpecName: "kube-api-access-jdjtr") pod "e43252e2-f02a-4803-b945-9ce6e746e104" (UID: "e43252e2-f02a-4803-b945-9ce6e746e104"). InnerVolumeSpecName "kube-api-access-jdjtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.951836 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjtr\" (UniqueName: \"kubernetes.io/projected/e43252e2-f02a-4803-b945-9ce6e746e104-kube-api-access-jdjtr\") on node \"crc\" DevicePath \"\"" Mar 16 18:40:53 crc kubenswrapper[4736]: I0316 18:40:53.951862 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.029153 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e43252e2-f02a-4803-b945-9ce6e746e104" (UID: "e43252e2-f02a-4803-b945-9ce6e746e104"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.057656 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43252e2-f02a-4803-b945-9ce6e746e104-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.138614 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xvwm9" event={"ID":"e43252e2-f02a-4803-b945-9ce6e746e104","Type":"ContainerDied","Data":"8cd618296222679e356739096beed470242b89a08b3d21276dc5e5424380dbc1"} Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.138715 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xvwm9" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.140063 4736 scope.go:117] "RemoveContainer" containerID="2f832717c8f9f1d97e80def2b9c87c0f86d36258536fe174866cee570bf740cd" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.186625 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.194760 4736 scope.go:117] "RemoveContainer" containerID="c164ca58845da298bb4eaa03d7a6ff365af0930c0bfdb13bccf826c54e20cf62" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.195391 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xvwm9"] Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.224946 4736 scope.go:117] "RemoveContainer" containerID="6316c85298b3d0a321611a6c5fe4704eb956d5a40ecc3a30df90cf90cc420ccf" Mar 16 18:40:54 crc kubenswrapper[4736]: I0316 18:40:54.988934 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" path="/var/lib/kubelet/pods/e43252e2-f02a-4803-b945-9ce6e746e104/volumes" Mar 16 18:41:08 crc kubenswrapper[4736]: I0316 18:41:08.508081 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:41:08 crc kubenswrapper[4736]: I0316 18:41:08.509523 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:41:18 crc kubenswrapper[4736]: I0316 18:41:18.423859 4736 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.00305892s: [/var/lib/containers/storage/overlay/9bc1ccddcb1ed84b350a72333bc5146d9855621973bb6958f1c29d02c28921d3/diff /var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qrt7l_d2eb8b3d-8b48-4110-bab7-66fc20948ee5/cert-manager-webhook/0.log]; will not log again for this container unless duration exceeds 2s Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.508537 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:41:38 
crc kubenswrapper[4736]: I0316 18:41:38.509256 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.509319 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.510591 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.510705 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311" gracePeriod=600 Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.710898 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311" exitCode=0 Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.710940 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311"} Mar 16 18:41:38 crc kubenswrapper[4736]: I0316 18:41:38.710971 4736 scope.go:117] "RemoveContainer" containerID="68b4fbdd117aa23d7db5f0db6ca855e759824d469f9aebc1dc4204b9a7666046" Mar 16 18:41:39 crc kubenswrapper[4736]: I0316 18:41:39.738969 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4"} Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.294845 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561442-hkp65"] Mar 16 18:42:00 crc kubenswrapper[4736]: E0316 18:42:00.298983 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="registry-server" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.299010 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="registry-server" Mar 16 18:42:00 crc kubenswrapper[4736]: E0316 18:42:00.299040 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="extract-utilities" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.299047 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="extract-utilities" Mar 16 18:42:00 crc kubenswrapper[4736]: E0316 18:42:00.299062 
4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="extract-content" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.299069 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="extract-content" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.299373 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43252e2-f02a-4803-b945-9ce6e746e104" containerName="registry-server" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.304951 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.319848 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.319860 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.326171 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.387239 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561442-hkp65"] Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.461441 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9rr\" (UniqueName: \"kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr\") pod \"auto-csr-approver-29561442-hkp65\" (UID: \"8ecd5486-0fb5-4256-82e5-e92e5d922030\") " pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.564303 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9rr\" (UniqueName: \"kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr\") pod \"auto-csr-approver-29561442-hkp65\" (UID: \"8ecd5486-0fb5-4256-82e5-e92e5d922030\") " pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.595810 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9rr\" (UniqueName: \"kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr\") pod \"auto-csr-approver-29561442-hkp65\" (UID: \"8ecd5486-0fb5-4256-82e5-e92e5d922030\") " pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:00 crc kubenswrapper[4736]: I0316 18:42:00.640858 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:01 crc kubenswrapper[4736]: I0316 18:42:01.783306 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561442-hkp65"] Mar 16 18:42:02 crc kubenswrapper[4736]: I0316 18:42:02.018685 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561442-hkp65" event={"ID":"8ecd5486-0fb5-4256-82e5-e92e5d922030","Type":"ContainerStarted","Data":"cde87a9ca4bbdda66d6705e413a195d4b0f882a8531a1c2201824873e8d7c774"} Mar 16 18:42:04 crc kubenswrapper[4736]: I0316 18:42:04.042754 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561442-hkp65" event={"ID":"8ecd5486-0fb5-4256-82e5-e92e5d922030","Type":"ContainerStarted","Data":"9cedd808de3ca36921262cac7fb2a5066faebf1877a6e40c0c5fd56920be7a70"} Mar 16 18:42:04 crc kubenswrapper[4736]: I0316 18:42:04.064723 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561442-hkp65" podStartSLOduration=3.023318896 podStartE2EDuration="4.06468588s" podCreationTimestamp="2026-03-16 18:42:00 +0000 UTC" firstStartedPulling="2026-03-16 18:42:01.798051764 +0000 UTC m=+12523.525442061" lastFinishedPulling="2026-03-16 18:42:02.839418758 +0000 UTC m=+12524.566809045" observedRunningTime="2026-03-16 18:42:04.060290189 +0000 UTC m=+12525.787680516" watchObservedRunningTime="2026-03-16 18:42:04.06468588 +0000 UTC m=+12525.792076197" Mar 16 18:42:05 crc kubenswrapper[4736]: I0316 18:42:05.059489 4736 generic.go:334] "Generic (PLEG): container finished" podID="8ecd5486-0fb5-4256-82e5-e92e5d922030" containerID="9cedd808de3ca36921262cac7fb2a5066faebf1877a6e40c0c5fd56920be7a70" exitCode=0 Mar 16 18:42:05 crc kubenswrapper[4736]: I0316 18:42:05.059671 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561442-hkp65" event={"ID":"8ecd5486-0fb5-4256-82e5-e92e5d922030","Type":"ContainerDied","Data":"9cedd808de3ca36921262cac7fb2a5066faebf1877a6e40c0c5fd56920be7a70"} Mar 16 18:42:06 crc kubenswrapper[4736]: I0316 18:42:06.557071 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:06 crc kubenswrapper[4736]: I0316 18:42:06.706379 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg9rr\" (UniqueName: \"kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr\") pod \"8ecd5486-0fb5-4256-82e5-e92e5d922030\" (UID: \"8ecd5486-0fb5-4256-82e5-e92e5d922030\") " Mar 16 18:42:06 crc kubenswrapper[4736]: I0316 18:42:06.713837 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr" (OuterVolumeSpecName: "kube-api-access-tg9rr") pod "8ecd5486-0fb5-4256-82e5-e92e5d922030" (UID: "8ecd5486-0fb5-4256-82e5-e92e5d922030"). InnerVolumeSpecName "kube-api-access-tg9rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:42:06 crc kubenswrapper[4736]: I0316 18:42:06.809283 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg9rr\" (UniqueName: \"kubernetes.io/projected/8ecd5486-0fb5-4256-82e5-e92e5d922030-kube-api-access-tg9rr\") on node \"crc\" DevicePath \"\"" Mar 16 18:42:07 crc kubenswrapper[4736]: I0316 18:42:07.089913 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561442-hkp65" event={"ID":"8ecd5486-0fb5-4256-82e5-e92e5d922030","Type":"ContainerDied","Data":"cde87a9ca4bbdda66d6705e413a195d4b0f882a8531a1c2201824873e8d7c774"} Mar 16 18:42:07 crc kubenswrapper[4736]: I0316 18:42:07.090006 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde87a9ca4bbdda66d6705e413a195d4b0f882a8531a1c2201824873e8d7c774" Mar 16 18:42:07 crc kubenswrapper[4736]: I0316 18:42:07.090159 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561442-hkp65" Mar 16 18:42:07 crc kubenswrapper[4736]: I0316 18:42:07.166164 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561436-7qf94"] Mar 16 18:42:07 crc kubenswrapper[4736]: I0316 18:42:07.175513 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561436-7qf94"] Mar 16 18:42:09 crc kubenswrapper[4736]: I0316 18:42:09.005048 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9884ed3c-e0e7-4e2b-baff-102c4f3dab68" path="/var/lib/kubelet/pods/9884ed3c-e0e7-4e2b-baff-102c4f3dab68/volumes" Mar 16 18:42:11 crc kubenswrapper[4736]: I0316 18:42:11.141480 4736 scope.go:117] "RemoveContainer" containerID="f019c9549d9b69a88f4d5a16c33301b706f65bf302e2ecf825a16a11a135612e" Mar 16 18:42:11 crc kubenswrapper[4736]: I0316 18:42:11.191373 4736 scope.go:117] "RemoveContainer" containerID="892af5277c94b3e9c63fa7d163d5a810c7822333a926029d5a719078ef895b52" Mar 16 18:42:41 crc kubenswrapper[4736]: I0316 18:42:41.555402 4736 generic.go:334] "Generic (PLEG): container finished" podID="2e738567-0291-4d67-99c4-7e217da6c59e" containerID="54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac" exitCode=0 Mar 16 18:42:41 crc kubenswrapper[4736]: I0316 18:42:41.555483 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vw8m2/must-gather-9456n" event={"ID":"2e738567-0291-4d67-99c4-7e217da6c59e","Type":"ContainerDied","Data":"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac"} Mar 16 18:42:41 crc kubenswrapper[4736]: I0316 18:42:41.556554 4736 scope.go:117] "RemoveContainer" containerID="54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac" Mar 16 18:42:41 crc kubenswrapper[4736]: I0316 18:42:41.992954 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vw8m2_must-gather-9456n_2e738567-0291-4d67-99c4-7e217da6c59e/gather/0.log" Mar 16 18:42:53 crc kubenswrapper[4736]: I0316 18:42:53.806937 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vw8m2/must-gather-9456n"] Mar 16 18:42:53 crc kubenswrapper[4736]: I0316 18:42:53.808254 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vw8m2/must-gather-9456n" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="copy" containerID="cri-o://cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6" gracePeriod=2 Mar 16 
18:42:53 crc kubenswrapper[4736]: I0316 18:42:53.821090 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vw8m2/must-gather-9456n"] Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.644233 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vw8m2_must-gather-9456n_2e738567-0291-4d67-99c4-7e217da6c59e/copy/0.log" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.645181 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.684944 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vw8m2_must-gather-9456n_2e738567-0291-4d67-99c4-7e217da6c59e/copy/0.log" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.685307 4736 generic.go:334] "Generic (PLEG): container finished" podID="2e738567-0291-4d67-99c4-7e217da6c59e" containerID="cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6" exitCode=143 Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.685381 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vw8m2/must-gather-9456n" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.685379 4736 scope.go:117] "RemoveContainer" containerID="cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.708051 4736 scope.go:117] "RemoveContainer" containerID="54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.763288 4736 scope.go:117] "RemoveContainer" containerID="cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.763827 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output\") pod \"2e738567-0291-4d67-99c4-7e217da6c59e\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.763937 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcbtj\" (UniqueName: \"kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj\") pod \"2e738567-0291-4d67-99c4-7e217da6c59e\" (UID: \"2e738567-0291-4d67-99c4-7e217da6c59e\") " Mar 16 18:42:54 crc kubenswrapper[4736]: E0316 18:42:54.779667 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6\": container with ID starting with cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6 not found: ID does not exist" containerID="cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.779710 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6"} err="failed to get container status \"cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6\": rpc error: code = NotFound desc = could not find container \"cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6\": container with ID starting with cc74e4677eb5727b05e510c227d72a10e756d633ae9731bb75e7d6ea8f48f8a6 
not found: ID does not exist" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.779734 4736 scope.go:117] "RemoveContainer" containerID="54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac" Mar 16 18:42:54 crc kubenswrapper[4736]: E0316 18:42:54.787230 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac\": container with ID starting with 54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac not found: ID does not exist" containerID="54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.787492 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac"} err="failed to get container status \"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac\": rpc error: code = NotFound desc = could not find container \"54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac\": container with ID starting with 54b219aa76ddbff5bcd8b54fc2a77d9297ba54844ff6d6b0db49c81b3b866fac not found: ID does not exist" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.797304 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj" (OuterVolumeSpecName: "kube-api-access-jcbtj") pod "2e738567-0291-4d67-99c4-7e217da6c59e" (UID: "2e738567-0291-4d67-99c4-7e217da6c59e"). InnerVolumeSpecName "kube-api-access-jcbtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:42:54 crc kubenswrapper[4736]: I0316 18:42:54.868656 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcbtj\" (UniqueName: \"kubernetes.io/projected/2e738567-0291-4d67-99c4-7e217da6c59e-kube-api-access-jcbtj\") on node \"crc\" DevicePath \"\"" Mar 16 18:42:55 crc kubenswrapper[4736]: I0316 18:42:55.012615 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2e738567-0291-4d67-99c4-7e217da6c59e" (UID: "2e738567-0291-4d67-99c4-7e217da6c59e"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:42:55 crc kubenswrapper[4736]: I0316 18:42:55.073670 4736 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e738567-0291-4d67-99c4-7e217da6c59e-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 16 18:42:56 crc kubenswrapper[4736]: I0316 18:42:56.997670 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" path="/var/lib/kubelet/pods/2e738567-0291-4d67-99c4-7e217da6c59e/volumes" Mar 16 18:43:11 crc kubenswrapper[4736]: I0316 18:43:11.330371 4736 scope.go:117] "RemoveContainer" containerID="0dbaa8d80dc3c758ba6e2249401fadfab6a91d88bfa43836aee9296e3a5fc002" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.357029 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:29 crc kubenswrapper[4736]: E0316 18:43:29.358462 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="copy" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.358492 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="copy" Mar 16 18:43:29 crc kubenswrapper[4736]: E0316 18:43:29.358534 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ecd5486-0fb5-4256-82e5-e92e5d922030" containerName="oc" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.358551 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecd5486-0fb5-4256-82e5-e92e5d922030" containerName="oc" Mar 16 18:43:29 crc kubenswrapper[4736]: E0316 18:43:29.358589 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="gather" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.358607 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="gather" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.358989 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="gather" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.359021 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e738567-0291-4d67-99c4-7e217da6c59e" containerName="copy" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.359043 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecd5486-0fb5-4256-82e5-e92e5d922030" containerName="oc" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.361836 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.388700 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.451942 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.452333 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsk9\" (UniqueName: \"kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.452517 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.554506 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.554571 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsk9\" (UniqueName: \"kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.554614 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.555074 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.555484 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.577708 4736 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kjsk9\" (UniqueName: \"kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9\") pod \"redhat-marketplace-plpph\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:29 crc kubenswrapper[4736]: I0316 18:43:29.697387 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:30 crc kubenswrapper[4736]: I0316 18:43:30.201364 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:31 crc kubenswrapper[4736]: I0316 18:43:31.038166 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerID="cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc" exitCode=0 Mar 16 18:43:31 crc kubenswrapper[4736]: I0316 18:43:31.038264 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerDied","Data":"cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc"} Mar 16 18:43:31 crc kubenswrapper[4736]: I0316 18:43:31.038467 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerStarted","Data":"ad86a32e5337da0c2107cc9bdb8e20126e53eb7f18c94080efae8b8bb4590da5"} Mar 16 18:43:31 crc kubenswrapper[4736]: I0316 18:43:31.040640 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:43:32 crc kubenswrapper[4736]: I0316 18:43:32.047374 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerStarted","Data":"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c"} Mar 16 18:43:33 crc kubenswrapper[4736]: I0316 18:43:33.057022 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerID="4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c" exitCode=0 Mar 16 18:43:33 crc kubenswrapper[4736]: I0316 18:43:33.057164 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerDied","Data":"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c"} Mar 16 18:43:34 crc kubenswrapper[4736]: I0316 18:43:34.070490 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerStarted","Data":"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f"} Mar 16 18:43:38 crc kubenswrapper[4736]: I0316 18:43:38.508436 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:43:38 crc kubenswrapper[4736]: I0316 18:43:38.508941 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:43:39 crc kubenswrapper[4736]: I0316 18:43:39.698986 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:39 crc kubenswrapper[4736]: I0316 18:43:39.699051 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:40 crc kubenswrapper[4736]: I0316 18:43:40.782209 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-plpph" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="registry-server" probeResult="failure" output=< Mar 16 18:43:40 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:43:40 crc kubenswrapper[4736]: > Mar 16 18:43:49 crc kubenswrapper[4736]: I0316 18:43:49.771604 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:49 crc kubenswrapper[4736]: I0316 18:43:49.810267 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plpph" podStartSLOduration=18.414800994 podStartE2EDuration="20.810250954s" podCreationTimestamp="2026-03-16 18:43:29 +0000 UTC" firstStartedPulling="2026-03-16 18:43:31.040381709 +0000 UTC m=+12612.767772006" lastFinishedPulling="2026-03-16 18:43:33.435831659 +0000 UTC m=+12615.163221966" observedRunningTime="2026-03-16 18:43:34.094794682 +0000 UTC m=+12615.822184969" watchObservedRunningTime="2026-03-16 18:43:49.810250954 +0000 UTC m=+12631.537641241" Mar 16 18:43:49 crc kubenswrapper[4736]: I0316 18:43:49.841835 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:50 crc kubenswrapper[4736]: I0316 18:43:50.012958 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.237393 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plpph" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="registry-server" containerID="cri-o://6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f" gracePeriod=2 Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.805010 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.955370 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities\") pod \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.955469 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjsk9\" (UniqueName: \"kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9\") pod \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.955536 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content\") pod \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\" (UID: \"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28\") " Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.956872 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities" (OuterVolumeSpecName: "utilities") pod "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" (UID: "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:43:51 crc kubenswrapper[4736]: I0316 18:43:51.977704 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9" (OuterVolumeSpecName: "kube-api-access-kjsk9") pod "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" (UID: "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28"). InnerVolumeSpecName "kube-api-access-kjsk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.009778 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" (UID: "dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.057547 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.057578 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjsk9\" (UniqueName: \"kubernetes.io/projected/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-kube-api-access-kjsk9\") on node \"crc\" DevicePath \"\"" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.057589 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.248616 4736 generic.go:334] "Generic (PLEG): container finished" podID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerID="6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f" exitCode=0 Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.248679 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerDied","Data":"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f"} Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.248731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plpph" event={"ID":"dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28","Type":"ContainerDied","Data":"ad86a32e5337da0c2107cc9bdb8e20126e53eb7f18c94080efae8b8bb4590da5"} Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.248723 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plpph" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.248750 4736 scope.go:117] "RemoveContainer" containerID="6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.288600 4736 scope.go:117] "RemoveContainer" containerID="4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.297993 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.308832 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plpph"] Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.310791 4736 scope.go:117] "RemoveContainer" containerID="cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.358371 4736 scope.go:117] "RemoveContainer" containerID="6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f" Mar 16 18:43:52 crc kubenswrapper[4736]: E0316 18:43:52.358862 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f\": container with ID starting with 6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f not found: ID does not exist" containerID="6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.358908 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f"} err="failed to get container status \"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f\": rpc error: code = NotFound desc = could not find container \"6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f\": container with ID starting with 6b594d0a1570d0f73448b170838343006b81d3f008e6bb524a1f6e861983ec3f not found: ID does not exist" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.358941 4736 scope.go:117] "RemoveContainer" containerID="4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c" Mar 16 18:43:52 crc kubenswrapper[4736]: E0316 18:43:52.359371 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c\": container with ID starting with 4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c not found: ID does not exist" containerID="4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.359398 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c"} err="failed to get container status \"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c\": rpc error: code = NotFound desc = could not find container \"4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c\": container with ID starting with 4f963a4bfa0ce5ad85753aca02549124042e98677e0f782aab2a11d4c273a87c not found: ID does not exist" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.359416 4736 scope.go:117] "RemoveContainer" 
containerID="cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc" Mar 16 18:43:52 crc kubenswrapper[4736]: E0316 18:43:52.359656 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc\": container with ID starting with cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc not found: ID does not exist" containerID="cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.359684 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc"} err="failed to get container status \"cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc\": rpc error: code = NotFound desc = could not find container \"cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc\": container with ID starting with cdd7996160a23969a123cac831c4126de91b7a533a19c187f1ccbf75b617f0bc not found: ID does not exist" Mar 16 18:43:52 crc kubenswrapper[4736]: I0316 18:43:52.988513 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" path="/var/lib/kubelet/pods/dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28/volumes" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.146079 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561444-ngx98"] Mar 16 18:44:00 crc kubenswrapper[4736]: E0316 18:44:00.147200 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="extract-utilities" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.147214 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="extract-utilities" Mar 16 18:44:00 crc kubenswrapper[4736]: E0316 18:44:00.147237 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="extract-content" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.147243 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="extract-content" Mar 16 18:44:00 crc kubenswrapper[4736]: E0316 18:44:00.147253 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="registry-server" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.147260 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="registry-server" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.147449 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf4acdf-bbe8-4bb6-9f40-86e7b1acaa28" containerName="registry-server" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.148087 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.151593 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.152003 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.152153 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.155477 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561444-ngx98"] Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.328671 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdc6g\" (UniqueName: \"kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g\") pod \"auto-csr-approver-29561444-ngx98\" (UID: \"1458e0bb-3d63-47d8-aa17-41302d7b078c\") " pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.430845 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdc6g\" (UniqueName: \"kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g\") pod \"auto-csr-approver-29561444-ngx98\" (UID: \"1458e0bb-3d63-47d8-aa17-41302d7b078c\") " pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.452318 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdc6g\" (UniqueName: \"kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g\") pod \"auto-csr-approver-29561444-ngx98\" (UID: \"1458e0bb-3d63-47d8-aa17-41302d7b078c\") " pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:00 crc kubenswrapper[4736]: I0316 18:44:00.473069 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:01 crc kubenswrapper[4736]: I0316 18:44:01.031206 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561444-ngx98"] Mar 16 18:44:01 crc kubenswrapper[4736]: I0316 18:44:01.351007 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561444-ngx98" event={"ID":"1458e0bb-3d63-47d8-aa17-41302d7b078c","Type":"ContainerStarted","Data":"e9879876acc300282b4cadf22f6852baa89fc692cf2208fd5d0bc58781ecc141"} Mar 16 18:44:03 crc kubenswrapper[4736]: I0316 18:44:03.368319 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561444-ngx98" event={"ID":"1458e0bb-3d63-47d8-aa17-41302d7b078c","Type":"ContainerStarted","Data":"4a2b01fbe0db029e6da6bf657d3044bea9fd2705aa9a6e05ad707a297d03f65d"} Mar 16 18:44:03 crc kubenswrapper[4736]: I0316 18:44:03.387242 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561444-ngx98" podStartSLOduration=1.982480517 podStartE2EDuration="3.387221569s" podCreationTimestamp="2026-03-16 18:44:00 +0000 UTC" firstStartedPulling="2026-03-16 18:44:01.045091484 +0000 UTC m=+12642.772481761" lastFinishedPulling="2026-03-16 18:44:02.449832526 +0000 UTC m=+12644.177222813" observedRunningTime="2026-03-16 18:44:03.38288436 +0000 UTC m=+12645.110274647" watchObservedRunningTime="2026-03-16 18:44:03.387221569 +0000 UTC m=+12645.114611866" Mar 16 18:44:04 crc kubenswrapper[4736]: I0316 18:44:04.378160 4736 generic.go:334] "Generic (PLEG): container finished" podID="1458e0bb-3d63-47d8-aa17-41302d7b078c" containerID="4a2b01fbe0db029e6da6bf657d3044bea9fd2705aa9a6e05ad707a297d03f65d" exitCode=0 Mar 16 18:44:04 crc kubenswrapper[4736]: I0316 18:44:04.378213 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561444-ngx98" event={"ID":"1458e0bb-3d63-47d8-aa17-41302d7b078c","Type":"ContainerDied","Data":"4a2b01fbe0db029e6da6bf657d3044bea9fd2705aa9a6e05ad707a297d03f65d"} Mar 16 18:44:05 crc kubenswrapper[4736]: I0316 18:44:05.806481 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:05 crc kubenswrapper[4736]: I0316 18:44:05.944754 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdc6g\" (UniqueName: \"kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g\") pod \"1458e0bb-3d63-47d8-aa17-41302d7b078c\" (UID: \"1458e0bb-3d63-47d8-aa17-41302d7b078c\") " Mar 16 18:44:05 crc kubenswrapper[4736]: I0316 18:44:05.951317 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g" (OuterVolumeSpecName: "kube-api-access-xdc6g") pod "1458e0bb-3d63-47d8-aa17-41302d7b078c" (UID: "1458e0bb-3d63-47d8-aa17-41302d7b078c"). InnerVolumeSpecName "kube-api-access-xdc6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.047099 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdc6g\" (UniqueName: \"kubernetes.io/projected/1458e0bb-3d63-47d8-aa17-41302d7b078c-kube-api-access-xdc6g\") on node \"crc\" DevicePath \"\"" Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.406004 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561444-ngx98" event={"ID":"1458e0bb-3d63-47d8-aa17-41302d7b078c","Type":"ContainerDied","Data":"e9879876acc300282b4cadf22f6852baa89fc692cf2208fd5d0bc58781ecc141"} Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.406206 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9879876acc300282b4cadf22f6852baa89fc692cf2208fd5d0bc58781ecc141" Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.406341 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561444-ngx98" Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.487479 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561438-jgrtd"] Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.497517 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561438-jgrtd"] Mar 16 18:44:06 crc kubenswrapper[4736]: I0316 18:44:06.998301 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4abb45b1-25f1-4f81-a0ea-d582bbad2168" path="/var/lib/kubelet/pods/4abb45b1-25f1-4f81-a0ea-d582bbad2168/volumes" Mar 16 18:44:08 crc kubenswrapper[4736]: I0316 18:44:08.508030 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:44:08 crc kubenswrapper[4736]: I0316 18:44:08.509156 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:44:11 crc kubenswrapper[4736]: I0316 18:44:11.418569 4736 scope.go:117] "RemoveContainer" containerID="5e7f94777afecdd775f24aece1426382789e426219575b2668181df7cbd16c08" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.508007 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.508499 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.508534 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.509272 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.509317 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" gracePeriod=600 Mar 16 18:44:38 crc kubenswrapper[4736]: E0316 18:44:38.642605 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.764213 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" exitCode=0 Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.764281 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4"} Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.764332 4736 scope.go:117] "RemoveContainer" containerID="5097a79e54b5c08f99f9bf301e89c1f69efbb4fecb46a12af41f8d21c7fa3311" Mar 16 18:44:38 crc kubenswrapper[4736]: I0316 18:44:38.765652 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:44:38 crc kubenswrapper[4736]: E0316 18:44:38.766252 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:44:51 crc kubenswrapper[4736]: I0316 18:44:51.978374 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:44:51 crc kubenswrapper[4736]: E0316 18:44:51.979489 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 
18:45:00.171328 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv"] Mar 16 18:45:00 crc kubenswrapper[4736]: E0316 18:45:00.172440 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1458e0bb-3d63-47d8-aa17-41302d7b078c" containerName="oc" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.172456 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="1458e0bb-3d63-47d8-aa17-41302d7b078c" containerName="oc" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.172688 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="1458e0bb-3d63-47d8-aa17-41302d7b078c" containerName="oc" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.173466 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.177013 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.184338 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.194575 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv"] Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.335940 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.336028 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wfc\" (UniqueName: \"kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.336294 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.438278 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.438446 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume\") pod 
\"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.438483 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wfc\" (UniqueName: \"kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.439376 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.457069 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.464631 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57wfc\" (UniqueName: \"kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc\") pod \"collect-profiles-29561445-dcjwv\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.501304 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:00 crc kubenswrapper[4736]: I0316 18:45:00.917598 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv"] Mar 16 18:45:01 crc kubenswrapper[4736]: I0316 18:45:01.023618 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" event={"ID":"cbf6656e-080f-44e9-9f2e-576088800b2d","Type":"ContainerStarted","Data":"5e08a85ab56b953fe4bcf128a3ba09fd97166969455fea8c5c7f0059e46b3af8"} Mar 16 18:45:02 crc kubenswrapper[4736]: I0316 18:45:02.034282 4736 generic.go:334] "Generic (PLEG): container finished" podID="cbf6656e-080f-44e9-9f2e-576088800b2d" containerID="92dd3f4737f07fea725cbee6d1dc48bdbeb4149a65f3856f35f016066320f96b" exitCode=0 Mar 16 18:45:02 crc kubenswrapper[4736]: I0316 18:45:02.034386 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" event={"ID":"cbf6656e-080f-44e9-9f2e-576088800b2d","Type":"ContainerDied","Data":"92dd3f4737f07fea725cbee6d1dc48bdbeb4149a65f3856f35f016066320f96b"} Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.608677 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.714700 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume\") pod \"cbf6656e-080f-44e9-9f2e-576088800b2d\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.714796 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume\") pod \"cbf6656e-080f-44e9-9f2e-576088800b2d\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.714948 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57wfc\" (UniqueName: \"kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc\") pod \"cbf6656e-080f-44e9-9f2e-576088800b2d\" (UID: \"cbf6656e-080f-44e9-9f2e-576088800b2d\") " Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.715917 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume" (OuterVolumeSpecName: "config-volume") pod "cbf6656e-080f-44e9-9f2e-576088800b2d" (UID: "cbf6656e-080f-44e9-9f2e-576088800b2d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.720706 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc" (OuterVolumeSpecName: "kube-api-access-57wfc") pod "cbf6656e-080f-44e9-9f2e-576088800b2d" (UID: "cbf6656e-080f-44e9-9f2e-576088800b2d"). InnerVolumeSpecName "kube-api-access-57wfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.721928 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cbf6656e-080f-44e9-9f2e-576088800b2d" (UID: "cbf6656e-080f-44e9-9f2e-576088800b2d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.817799 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57wfc\" (UniqueName: \"kubernetes.io/projected/cbf6656e-080f-44e9-9f2e-576088800b2d-kube-api-access-57wfc\") on node \"crc\" DevicePath \"\"" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.817855 4736 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cbf6656e-080f-44e9-9f2e-576088800b2d-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:45:03 crc kubenswrapper[4736]: I0316 18:45:03.817875 4736 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6656e-080f-44e9-9f2e-576088800b2d-config-volume\") on node \"crc\" DevicePath \"\"" Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.057268 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" event={"ID":"cbf6656e-080f-44e9-9f2e-576088800b2d","Type":"ContainerDied","Data":"5e08a85ab56b953fe4bcf128a3ba09fd97166969455fea8c5c7f0059e46b3af8"} Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.057311 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e08a85ab56b953fe4bcf128a3ba09fd97166969455fea8c5c7f0059e46b3af8" Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.057352 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29561445-dcjwv" Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.736506 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm"] Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.751935 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29561400-xd6vm"] Mar 16 18:45:04 crc kubenswrapper[4736]: I0316 18:45:04.991471 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29211b8-900b-411e-8762-3dfaf7b4a740" path="/var/lib/kubelet/pods/e29211b8-900b-411e-8762-3dfaf7b4a740/volumes" Mar 16 18:45:05 crc kubenswrapper[4736]: I0316 18:45:05.978814 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:45:05 crc kubenswrapper[4736]: E0316 18:45:05.979160 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:45:11 crc kubenswrapper[4736]: I0316 18:45:11.629249 4736 scope.go:117] "RemoveContainer" containerID="fbcec20ba52e34a9e4ba1f354c206dc3f5f6640f730578972249ccce0cf533f6" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.381866 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-285pn/must-gather-kmbcd"] Mar 16 18:45:14 crc kubenswrapper[4736]: E0316 18:45:14.382702 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf6656e-080f-44e9-9f2e-576088800b2d" containerName="collect-profiles" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 
18:45:14.382714 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf6656e-080f-44e9-9f2e-576088800b2d" containerName="collect-profiles" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.382901 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf6656e-080f-44e9-9f2e-576088800b2d" containerName="collect-profiles" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.383852 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.396244 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-285pn"/"kube-root-ca.crt" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.396453 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-285pn"/"openshift-service-ca.crt" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.425430 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-285pn/must-gather-kmbcd"] Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.438081 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.438133 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8qw\" (UniqueName: \"kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.542059 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.542121 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8qw\" (UniqueName: \"kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.542636 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.565365 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8qw\" (UniqueName: \"kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw\") pod \"must-gather-kmbcd\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:14 crc kubenswrapper[4736]: I0316 18:45:14.732692 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:45:15 crc kubenswrapper[4736]: I0316 18:45:15.333375 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-285pn/must-gather-kmbcd"] Mar 16 18:45:16 crc kubenswrapper[4736]: I0316 18:45:16.217355 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/must-gather-kmbcd" event={"ID":"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a","Type":"ContainerStarted","Data":"edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b"} Mar 16 18:45:16 crc kubenswrapper[4736]: I0316 18:45:16.217909 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/must-gather-kmbcd" event={"ID":"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a","Type":"ContainerStarted","Data":"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2"} Mar 16 18:45:16 crc kubenswrapper[4736]: I0316 18:45:16.217921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/must-gather-kmbcd" event={"ID":"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a","Type":"ContainerStarted","Data":"f5de8a4df4c68be6177e6d30d15bb7a5e135746f435bfc1d3568f796bac53977"} Mar 16 18:45:16 crc kubenswrapper[4736]: I0316 18:45:16.243858 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-285pn/must-gather-kmbcd" podStartSLOduration=2.243835981 podStartE2EDuration="2.243835981s" podCreationTimestamp="2026-03-16 18:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:45:16.234430085 +0000 UTC m=+12717.961820382" watchObservedRunningTime="2026-03-16 18:45:16.243835981 +0000 UTC m=+12717.971226268" Mar 16 18:45:19 crc kubenswrapper[4736]: I0316 18:45:19.978687 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:45:19 crc kubenswrapper[4736]: E0316 18:45:19.979456 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:45:22 crc kubenswrapper[4736]: I0316 18:45:22.959845 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-285pn/crc-debug-5lk4n"] Mar 16 18:45:22 crc kubenswrapper[4736]: I0316 18:45:22.964415 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:22 crc kubenswrapper[4736]: I0316 18:45:22.971089 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-285pn"/"default-dockercfg-qxxzk" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.106005 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.106158 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdgvx\" (UniqueName: \"kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.213008 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdgvx\" (UniqueName: \"kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.213658 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.225284 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.246736 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdgvx\" (UniqueName: \"kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx\") pod \"crc-debug-5lk4n\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:23 crc kubenswrapper[4736]: I0316 18:45:23.304697 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:45:24 crc kubenswrapper[4736]: I0316 18:45:24.282385 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-5lk4n" event={"ID":"601704ab-1d83-4e16-9d39-2b089918e296","Type":"ContainerStarted","Data":"16c256e889cd4202b728abd9c868735a6a96ac8facf3720751050d7c7c154978"} Mar 16 18:45:24 crc kubenswrapper[4736]: I0316 18:45:24.282870 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-5lk4n" event={"ID":"601704ab-1d83-4e16-9d39-2b089918e296","Type":"ContainerStarted","Data":"934dea1cb926231999efbe0d3f470d6dcba8e24a4a72f08769f56617109b555f"} Mar 16 18:45:24 crc kubenswrapper[4736]: I0316 18:45:24.303476 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-285pn/crc-debug-5lk4n" podStartSLOduration=2.303460655 podStartE2EDuration="2.303460655s" podCreationTimestamp="2026-03-16 18:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:45:24.295321183 +0000 UTC m=+12726.022711470" watchObservedRunningTime="2026-03-16 18:45:24.303460655 +0000 UTC m=+12726.030850942" Mar 16 18:45:32 crc kubenswrapper[4736]: I0316 18:45:32.977825 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:45:32 crc kubenswrapper[4736]: E0316 18:45:32.978572 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:45:47 crc kubenswrapper[4736]: I0316 18:45:47.978193 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:45:47 crc kubenswrapper[4736]: E0316 18:45:47.979039 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:45:59 crc kubenswrapper[4736]: I0316 18:45:59.978565 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:45:59 crc kubenswrapper[4736]: E0316 18:45:59.979449 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.139857 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561446-9cg4l"] Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.143221 4736 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.145502 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.147300 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.148441 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.150160 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561446-9cg4l"] Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.239046 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gvsq\" (UniqueName: \"kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq\") pod \"auto-csr-approver-29561446-9cg4l\" (UID: \"8f110314-5c8b-4ce2-aab5-5a8e3944cab5\") " pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.341540 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gvsq\" (UniqueName: \"kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq\") pod \"auto-csr-approver-29561446-9cg4l\" (UID: \"8f110314-5c8b-4ce2-aab5-5a8e3944cab5\") " pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.360724 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gvsq\" (UniqueName: \"kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq\") pod \"auto-csr-approver-29561446-9cg4l\" (UID: \"8f110314-5c8b-4ce2-aab5-5a8e3944cab5\") " pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:00 crc kubenswrapper[4736]: I0316 18:46:00.460156 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:01 crc kubenswrapper[4736]: I0316 18:46:01.205957 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561446-9cg4l"] Mar 16 18:46:01 crc kubenswrapper[4736]: I0316 18:46:01.591141 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" event={"ID":"8f110314-5c8b-4ce2-aab5-5a8e3944cab5","Type":"ContainerStarted","Data":"7720bf1b4e3f488b7fdedde1cf06df8c8b4a3f44665bdd0fb72405553c15c959"} Mar 16 18:46:03 crc kubenswrapper[4736]: I0316 18:46:03.607299 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" event={"ID":"8f110314-5c8b-4ce2-aab5-5a8e3944cab5","Type":"ContainerStarted","Data":"9bca14046d6eb9f7c855857cc46a427460d4026b5de72346e4d1aaed266ec42e"} Mar 16 18:46:04 crc kubenswrapper[4736]: I0316 18:46:04.617428 4736 generic.go:334] "Generic (PLEG): container finished" podID="8f110314-5c8b-4ce2-aab5-5a8e3944cab5" containerID="9bca14046d6eb9f7c855857cc46a427460d4026b5de72346e4d1aaed266ec42e" exitCode=0 Mar 16 18:46:04 crc kubenswrapper[4736]: I0316 18:46:04.617762 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" event={"ID":"8f110314-5c8b-4ce2-aab5-5a8e3944cab5","Type":"ContainerDied","Data":"9bca14046d6eb9f7c855857cc46a427460d4026b5de72346e4d1aaed266ec42e"} Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.016570 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.143008 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gvsq\" (UniqueName: \"kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq\") pod \"8f110314-5c8b-4ce2-aab5-5a8e3944cab5\" (UID: \"8f110314-5c8b-4ce2-aab5-5a8e3944cab5\") " Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.152234 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq" (OuterVolumeSpecName: "kube-api-access-7gvsq") pod "8f110314-5c8b-4ce2-aab5-5a8e3944cab5" (UID: "8f110314-5c8b-4ce2-aab5-5a8e3944cab5"). InnerVolumeSpecName "kube-api-access-7gvsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.245087 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gvsq\" (UniqueName: \"kubernetes.io/projected/8f110314-5c8b-4ce2-aab5-5a8e3944cab5-kube-api-access-7gvsq\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.635474 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" event={"ID":"8f110314-5c8b-4ce2-aab5-5a8e3944cab5","Type":"ContainerDied","Data":"7720bf1b4e3f488b7fdedde1cf06df8c8b4a3f44665bdd0fb72405553c15c959"} Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.636219 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7720bf1b4e3f488b7fdedde1cf06df8c8b4a3f44665bdd0fb72405553c15c959" Mar 16 18:46:06 crc kubenswrapper[4736]: I0316 18:46:06.635666 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561446-9cg4l" Mar 16 18:46:07 crc kubenswrapper[4736]: I0316 18:46:07.085918 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561440-78r7d"] Mar 16 18:46:07 crc kubenswrapper[4736]: I0316 18:46:07.097656 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561440-78r7d"] Mar 16 18:46:07 crc kubenswrapper[4736]: I0316 18:46:07.647506 4736 generic.go:334] "Generic (PLEG): container finished" podID="601704ab-1d83-4e16-9d39-2b089918e296" containerID="16c256e889cd4202b728abd9c868735a6a96ac8facf3720751050d7c7c154978" exitCode=0 Mar 16 18:46:07 crc kubenswrapper[4736]: I0316 18:46:07.647554 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-5lk4n" event={"ID":"601704ab-1d83-4e16-9d39-2b089918e296","Type":"ContainerDied","Data":"16c256e889cd4202b728abd9c868735a6a96ac8facf3720751050d7c7c154978"} Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.757534 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.789842 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdgvx\" (UniqueName: \"kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx\") pod \"601704ab-1d83-4e16-9d39-2b089918e296\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.790065 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host\") pod \"601704ab-1d83-4e16-9d39-2b089918e296\" (UID: \"601704ab-1d83-4e16-9d39-2b089918e296\") " Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.790474 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host" (OuterVolumeSpecName: "host") pod "601704ab-1d83-4e16-9d39-2b089918e296" (UID: "601704ab-1d83-4e16-9d39-2b089918e296"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.790510 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-285pn/crc-debug-5lk4n"] Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.805299 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx" (OuterVolumeSpecName: "kube-api-access-wdgvx") pod "601704ab-1d83-4e16-9d39-2b089918e296" (UID: "601704ab-1d83-4e16-9d39-2b089918e296"). InnerVolumeSpecName "kube-api-access-wdgvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.839692 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-285pn/crc-debug-5lk4n"] Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.892480 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdgvx\" (UniqueName: \"kubernetes.io/projected/601704ab-1d83-4e16-9d39-2b089918e296-kube-api-access-wdgvx\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.892516 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/601704ab-1d83-4e16-9d39-2b089918e296-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.989844 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156" path="/var/lib/kubelet/pods/11bd4e5b-b7a7-43b6-b5e4-d29b0ecea156/volumes" Mar 16 18:46:08 crc kubenswrapper[4736]: I0316 18:46:08.990858 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601704ab-1d83-4e16-9d39-2b089918e296" path="/var/lib/kubelet/pods/601704ab-1d83-4e16-9d39-2b089918e296/volumes" Mar 16 18:46:09 crc kubenswrapper[4736]: I0316 18:46:09.687518 4736 scope.go:117] "RemoveContainer" containerID="16c256e889cd4202b728abd9c868735a6a96ac8facf3720751050d7c7c154978" Mar 16 18:46:09 crc kubenswrapper[4736]: I0316 18:46:09.687572 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-5lk4n" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.024553 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-285pn/crc-debug-7xhqp"] Mar 16 18:46:10 crc kubenswrapper[4736]: E0316 18:46:10.025115 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f110314-5c8b-4ce2-aab5-5a8e3944cab5" containerName="oc" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.025139 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f110314-5c8b-4ce2-aab5-5a8e3944cab5" containerName="oc" Mar 16 18:46:10 crc kubenswrapper[4736]: E0316 18:46:10.025182 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601704ab-1d83-4e16-9d39-2b089918e296" containerName="container-00" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.025189 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="601704ab-1d83-4e16-9d39-2b089918e296" containerName="container-00" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.025365 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="601704ab-1d83-4e16-9d39-2b089918e296" containerName="container-00" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.025388 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f110314-5c8b-4ce2-aab5-5a8e3944cab5" containerName="oc" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.025908 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.029571 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-285pn"/"default-dockercfg-qxxzk" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.115303 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsl2\" (UniqueName: \"kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.115705 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.217541 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.217656 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwsl2\" (UniqueName: \"kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.217689 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.236027 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwsl2\" (UniqueName: \"kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2\") pod \"crc-debug-7xhqp\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.340068 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.696826 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-7xhqp" event={"ID":"87bc2d03-c4e8-446d-83d3-17aa64e59a88","Type":"ContainerStarted","Data":"2d777ffc3caecb8b724ea6dbaf6d5ae55bb4c1ab0cceac79de4edf31f64b853b"} Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.697115 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-7xhqp" event={"ID":"87bc2d03-c4e8-446d-83d3-17aa64e59a88","Type":"ContainerStarted","Data":"5cf1a2eb90ec5d5cc230bab9dd76bcec496bdc5c910660a9f3f5c890095bb1aa"} Mar 16 18:46:10 crc kubenswrapper[4736]: I0316 18:46:10.713148 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-285pn/crc-debug-7xhqp" podStartSLOduration=0.713132558 podStartE2EDuration="713.132558ms" podCreationTimestamp="2026-03-16 18:46:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-16 18:46:10.708409789 +0000 UTC m=+12772.435800076" watchObservedRunningTime="2026-03-16 18:46:10.713132558 +0000 UTC m=+12772.440522845" Mar 16 18:46:11 crc kubenswrapper[4736]: I0316 18:46:11.720723 4736 generic.go:334] "Generic (PLEG): container finished" podID="87bc2d03-c4e8-446d-83d3-17aa64e59a88" containerID="2d777ffc3caecb8b724ea6dbaf6d5ae55bb4c1ab0cceac79de4edf31f64b853b" exitCode=0 Mar 16 18:46:11 crc kubenswrapper[4736]: I0316 18:46:11.720960 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-7xhqp" event={"ID":"87bc2d03-c4e8-446d-83d3-17aa64e59a88","Type":"ContainerDied","Data":"2d777ffc3caecb8b724ea6dbaf6d5ae55bb4c1ab0cceac79de4edf31f64b853b"} Mar 16 18:46:11 crc kubenswrapper[4736]: I0316 18:46:11.754193 4736 scope.go:117] "RemoveContainer" containerID="1520920d9bfe76a5cbf0008494ecd01ce0f017bf9ef2066c1bb70fa06ab86d01" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.822334 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.861884 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwsl2\" (UniqueName: \"kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2\") pod \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.861957 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host\") pod \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\" (UID: \"87bc2d03-c4e8-446d-83d3-17aa64e59a88\") " Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.862395 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host" (OuterVolumeSpecName: "host") pod "87bc2d03-c4e8-446d-83d3-17aa64e59a88" (UID: "87bc2d03-c4e8-446d-83d3-17aa64e59a88"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.862648 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/87bc2d03-c4e8-446d-83d3-17aa64e59a88-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.867208 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2" (OuterVolumeSpecName: "kube-api-access-cwsl2") pod "87bc2d03-c4e8-446d-83d3-17aa64e59a88" (UID: "87bc2d03-c4e8-446d-83d3-17aa64e59a88"). InnerVolumeSpecName "kube-api-access-cwsl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.916173 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-285pn/crc-debug-7xhqp"] Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.923929 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-285pn/crc-debug-7xhqp"] Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.963993 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwsl2\" (UniqueName: \"kubernetes.io/projected/87bc2d03-c4e8-446d-83d3-17aa64e59a88-kube-api-access-cwsl2\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:12 crc kubenswrapper[4736]: I0316 18:46:12.989237 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bc2d03-c4e8-446d-83d3-17aa64e59a88" path="/var/lib/kubelet/pods/87bc2d03-c4e8-446d-83d3-17aa64e59a88/volumes" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.674389 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:46:13 crc kubenswrapper[4736]: E0316 18:46:13.674847 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bc2d03-c4e8-446d-83d3-17aa64e59a88" containerName="container-00" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.674866 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bc2d03-c4e8-446d-83d3-17aa64e59a88" containerName="container-00" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.675065 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bc2d03-c4e8-446d-83d3-17aa64e59a88" containerName="container-00" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.676429 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.689832 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.738861 4736 scope.go:117] "RemoveContainer" containerID="2d777ffc3caecb8b724ea6dbaf6d5ae55bb4c1ab0cceac79de4edf31f64b853b" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.738928 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-7xhqp" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.777214 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.777550 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbhv\" (UniqueName: \"kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.777583 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.879010 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.879134 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrbhv\" (UniqueName: \"kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.879172 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.879526 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.879616 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities\") pod \"redhat-operators-n68dw\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.896240 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrbhv\" (UniqueName: \"kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv\") pod \"redhat-operators-n68dw\" (UID: 
\"855bd155-e49f-4ffd-b109-dae4b051a56f\") " pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.978262 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:46:13 crc kubenswrapper[4736]: E0316 18:46:13.978583 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:46:13 crc kubenswrapper[4736]: I0316 18:46:13.993618 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.179198 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-285pn/crc-debug-n8zh7"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.180708 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.191372 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-285pn"/"default-dockercfg-qxxzk" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.300419 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.300466 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxlf\" (UniqueName: \"kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.402580 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.402634 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxlf\" (UniqueName: \"kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.402785 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.427999 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9bxlf\" (UniqueName: \"kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf\") pod \"crc-debug-n8zh7\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.500276 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.520092 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:14 crc kubenswrapper[4736]: W0316 18:46:14.556153 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4151b176_dee3_4c23_891a_85aabb1491a9.slice/crio-5d53193fd20d1952068117c7aee4b017ab3bd082ded4cc168da068953f892180 WatchSource:0}: Error finding container 5d53193fd20d1952068117c7aee4b017ab3bd082ded4cc168da068953f892180: Status 404 returned error can't find the container with id 5d53193fd20d1952068117c7aee4b017ab3bd082ded4cc168da068953f892180 Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.677953 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.705482 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.738994 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.798055 4736 generic.go:334] "Generic (PLEG): container finished" podID="4151b176-dee3-4c23-891a-85aabb1491a9" containerID="9603fc1c99aec624b22584b26374a0f993beb1a990f2d6429189a9a66837000a" exitCode=0 Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.798206 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-n8zh7" event={"ID":"4151b176-dee3-4c23-891a-85aabb1491a9","Type":"ContainerDied","Data":"9603fc1c99aec624b22584b26374a0f993beb1a990f2d6429189a9a66837000a"} Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.798237 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/crc-debug-n8zh7" event={"ID":"4151b176-dee3-4c23-891a-85aabb1491a9","Type":"ContainerStarted","Data":"5d53193fd20d1952068117c7aee4b017ab3bd082ded4cc168da068953f892180"} Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.800758 4736 generic.go:334] "Generic (PLEG): container finished" podID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerID="be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d" exitCode=0 Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.800803 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerDied","Data":"be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d"} Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.800828 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerStarted","Data":"a640ac76ef525da6a732f35e581f2bdc9089ff3a2ddc86933bf00fa3c7670eed"} Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.814622 4736 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.814911 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcmq\" (UniqueName: \"kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.815039 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.867163 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-285pn/crc-debug-n8zh7"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.874358 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-285pn/crc-debug-n8zh7"] Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.918973 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.919076 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.919138 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcmq\" (UniqueName: \"kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.920455 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.920551 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:14 crc kubenswrapper[4736]: I0316 18:46:14.946929 4736 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcmq\" (UniqueName: \"kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq\") pod \"certified-operators-88fbp\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:15 crc kubenswrapper[4736]: I0316 18:46:15.119503 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:15 crc kubenswrapper[4736]: I0316 18:46:15.574511 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:15 crc kubenswrapper[4736]: W0316 18:46:15.581373 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802d5c93_6fcd_4d68_abcb_6d8a863e91bc.slice/crio-1d2d2b9358ebaf4f5cd42d2a61c4dfa4137a7d1eda412c084ebd814a4000e462 WatchSource:0}: Error finding container 1d2d2b9358ebaf4f5cd42d2a61c4dfa4137a7d1eda412c084ebd814a4000e462: Status 404 returned error can't find the container with id 1d2d2b9358ebaf4f5cd42d2a61c4dfa4137a7d1eda412c084ebd814a4000e462 Mar 16 18:46:15 crc kubenswrapper[4736]: I0316 18:46:15.810997 4736 generic.go:334] "Generic (PLEG): container finished" podID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerID="29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3" exitCode=0 Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.811086 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerDied","Data":"29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3"} Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.811391 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerStarted","Data":"1d2d2b9358ebaf4f5cd42d2a61c4dfa4137a7d1eda412c084ebd814a4000e462"} Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.824907 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.935169 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bxlf\" (UniqueName: \"kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf\") pod \"4151b176-dee3-4c23-891a-85aabb1491a9\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.935230 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host\") pod \"4151b176-dee3-4c23-891a-85aabb1491a9\" (UID: \"4151b176-dee3-4c23-891a-85aabb1491a9\") " Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.935429 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host" (OuterVolumeSpecName: "host") pod "4151b176-dee3-4c23-891a-85aabb1491a9" (UID: "4151b176-dee3-4c23-891a-85aabb1491a9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:15.935831 4736 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4151b176-dee3-4c23-891a-85aabb1491a9-host\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.337195 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf" (OuterVolumeSpecName: "kube-api-access-9bxlf") pod "4151b176-dee3-4c23-891a-85aabb1491a9" (UID: "4151b176-dee3-4c23-891a-85aabb1491a9"). InnerVolumeSpecName "kube-api-access-9bxlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.437515 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bxlf\" (UniqueName: \"kubernetes.io/projected/4151b176-dee3-4c23-891a-85aabb1491a9-kube-api-access-9bxlf\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.821447 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerStarted","Data":"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630"} Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.823232 4736 scope.go:117] "RemoveContainer" containerID="9603fc1c99aec624b22584b26374a0f993beb1a990f2d6429189a9a66837000a" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.823256 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/crc-debug-n8zh7" Mar 16 18:46:16 crc kubenswrapper[4736]: I0316 18:46:16.989797 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4151b176-dee3-4c23-891a-85aabb1491a9" path="/var/lib/kubelet/pods/4151b176-dee3-4c23-891a-85aabb1491a9/volumes" Mar 16 18:46:17 crc kubenswrapper[4736]: I0316 18:46:17.849410 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerStarted","Data":"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e"} Mar 16 18:46:19 crc kubenswrapper[4736]: I0316 18:46:19.876206 4736 generic.go:334] "Generic (PLEG): container finished" podID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerID="5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e" exitCode=0 Mar 16 18:46:19 crc kubenswrapper[4736]: I0316 18:46:19.876270 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerDied","Data":"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e"} Mar 16 18:46:20 crc kubenswrapper[4736]: I0316 18:46:20.887778 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerStarted","Data":"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e"} Mar 16 18:46:20 crc kubenswrapper[4736]: I0316 18:46:20.911646 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88fbp" podStartSLOduration=2.21397855 podStartE2EDuration="6.911629278s" podCreationTimestamp="2026-03-16 18:46:14 +0000 UTC" 
firstStartedPulling="2026-03-16 18:46:15.814805649 +0000 UTC m=+12777.542195936" lastFinishedPulling="2026-03-16 18:46:20.512456377 +0000 UTC m=+12782.239846664" observedRunningTime="2026-03-16 18:46:20.904762891 +0000 UTC m=+12782.632153178" watchObservedRunningTime="2026-03-16 18:46:20.911629278 +0000 UTC m=+12782.639019565" Mar 16 18:46:22 crc kubenswrapper[4736]: I0316 18:46:22.907596 4736 generic.go:334] "Generic (PLEG): container finished" podID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerID="2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630" exitCode=0 Mar 16 18:46:22 crc kubenswrapper[4736]: I0316 18:46:22.907670 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerDied","Data":"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630"} Mar 16 18:46:23 crc kubenswrapper[4736]: I0316 18:46:23.919041 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerStarted","Data":"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5"} Mar 16 18:46:23 crc kubenswrapper[4736]: I0316 18:46:23.941449 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n68dw" podStartSLOduration=2.331832483 podStartE2EDuration="10.94143025s" podCreationTimestamp="2026-03-16 18:46:13 +0000 UTC" firstStartedPulling="2026-03-16 18:46:14.803066529 +0000 UTC m=+12776.530456816" lastFinishedPulling="2026-03-16 18:46:23.412664286 +0000 UTC m=+12785.140054583" observedRunningTime="2026-03-16 18:46:23.938292845 +0000 UTC m=+12785.665683132" watchObservedRunningTime="2026-03-16 18:46:23.94143025 +0000 UTC m=+12785.668820527" Mar 16 18:46:23 crc kubenswrapper[4736]: I0316 18:46:23.994894 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:23 crc kubenswrapper[4736]: I0316 18:46:23.995212 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:46:25 crc kubenswrapper[4736]: I0316 18:46:25.042471 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n68dw" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" probeResult="failure" output=< Mar 16 18:46:25 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:46:25 crc kubenswrapper[4736]: > Mar 16 18:46:25 crc kubenswrapper[4736]: I0316 18:46:25.120194 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:25 crc kubenswrapper[4736]: I0316 18:46:25.120249 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:26 crc kubenswrapper[4736]: I0316 18:46:26.173638 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-88fbp" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="registry-server" probeResult="failure" output=< Mar 16 18:46:26 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:46:26 crc kubenswrapper[4736]: > Mar 16 18:46:28 crc kubenswrapper[4736]: I0316 18:46:28.984710 4736 scope.go:117] "RemoveContainer" 
containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:46:28 crc kubenswrapper[4736]: E0316 18:46:28.985346 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:46:35 crc kubenswrapper[4736]: I0316 18:46:35.045896 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n68dw" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" probeResult="failure" output=< Mar 16 18:46:35 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:46:35 crc kubenswrapper[4736]: > Mar 16 18:46:35 crc kubenswrapper[4736]: I0316 18:46:35.180494 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:35 crc kubenswrapper[4736]: I0316 18:46:35.236266 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:35 crc kubenswrapper[4736]: I0316 18:46:35.411352 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.032147 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-88fbp" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="registry-server" containerID="cri-o://79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e" gracePeriod=2 Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.596189 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.701144 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcmq\" (UniqueName: \"kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq\") pod \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.702025 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content\") pod \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.702094 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities\") pod \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\" (UID: \"802d5c93-6fcd-4d68-abcb-6d8a863e91bc\") " Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.702872 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities" (OuterVolumeSpecName: "utilities") pod "802d5c93-6fcd-4d68-abcb-6d8a863e91bc" (UID: "802d5c93-6fcd-4d68-abcb-6d8a863e91bc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.709864 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq" (OuterVolumeSpecName: "kube-api-access-hxcmq") pod "802d5c93-6fcd-4d68-abcb-6d8a863e91bc" (UID: "802d5c93-6fcd-4d68-abcb-6d8a863e91bc"). InnerVolumeSpecName "kube-api-access-hxcmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.767846 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "802d5c93-6fcd-4d68-abcb-6d8a863e91bc" (UID: "802d5c93-6fcd-4d68-abcb-6d8a863e91bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.803781 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxcmq\" (UniqueName: \"kubernetes.io/projected/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-kube-api-access-hxcmq\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.803813 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:37 crc kubenswrapper[4736]: I0316 18:46:37.803822 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/802d5c93-6fcd-4d68-abcb-6d8a863e91bc-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.040578 4736 generic.go:334] "Generic (PLEG): container finished" podID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerID="79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e" exitCode=0 Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.040643 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerDied","Data":"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e"} Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.040678 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88fbp" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.040694 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88fbp" event={"ID":"802d5c93-6fcd-4d68-abcb-6d8a863e91bc","Type":"ContainerDied","Data":"1d2d2b9358ebaf4f5cd42d2a61c4dfa4137a7d1eda412c084ebd814a4000e462"} Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.040718 4736 scope.go:117] "RemoveContainer" containerID="79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.079807 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.080843 4736 scope.go:117] "RemoveContainer" containerID="5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.093869 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-88fbp"] Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.112272 4736 scope.go:117] "RemoveContainer" containerID="29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.155407 4736 scope.go:117] "RemoveContainer" containerID="79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e" Mar 16 18:46:38 crc kubenswrapper[4736]: E0316 18:46:38.160640 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e\": container with ID starting with 79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e not found: ID does not exist" containerID="79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.160679 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e"} err="failed to get container status \"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e\": rpc error: code = NotFound desc = could not find container \"79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e\": container with ID starting with 79322f24b9429984e39978f88ef659cfc9e18d82b477f6402e209a29be02803e not found: ID does not exist" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.160704 4736 scope.go:117] "RemoveContainer" containerID="5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e" Mar 16 18:46:38 crc kubenswrapper[4736]: E0316 18:46:38.161211 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e\": container with ID starting with 5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e not found: ID does not exist" containerID="5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.161234 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e"} err="failed to get container status \"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e\": rpc error: code = NotFound desc = could not find 
container \"5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e\": container with ID starting with 5b75ea4907184d0669ea5c656eeab4b933e4bf808752a9a93efcb57151031e3e not found: ID does not exist" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.161247 4736 scope.go:117] "RemoveContainer" containerID="29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3" Mar 16 18:46:38 crc kubenswrapper[4736]: E0316 18:46:38.161598 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3\": container with ID starting with 29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3 not found: ID does not exist" containerID="29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.161647 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3"} err="failed to get container status \"29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3\": rpc error: code = NotFound desc = could not find container \"29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3\": container with ID starting with 29dc1d53f96fc9a06b18d3c0eda60a10e65dd36bb005cf7f288b48928a0ee9b3 not found: ID does not exist" Mar 16 18:46:38 crc kubenswrapper[4736]: I0316 18:46:38.999313 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" path="/var/lib/kubelet/pods/802d5c93-6fcd-4d68-abcb-6d8a863e91bc/volumes" Mar 16 18:46:42 crc kubenswrapper[4736]: I0316 18:46:42.978348 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:46:42 crc kubenswrapper[4736]: E0316 18:46:42.979163 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:46:45 crc kubenswrapper[4736]: I0316 18:46:45.040210 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n68dw" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" probeResult="failure" output=< Mar 16 18:46:45 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:46:45 crc kubenswrapper[4736]: > Mar 16 18:46:52 crc kubenswrapper[4736]: I0316 18:46:52.980867 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59554d7c7d-jq5k7_3c9e0de3-6386-4733-bc2b-b2eec48d8098/barbican-api/0.log" Mar 16 18:46:52 crc kubenswrapper[4736]: I0316 18:46:52.999421 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59554d7c7d-jq5k7_3c9e0de3-6386-4733-bc2b-b2eec48d8098/barbican-api-log/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.200653 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5868c68fb4-ww9v7_ad07fdc6-06e5-4045-8049-783bc6e6d5c6/barbican-keystone-listener/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.414609 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5868c68fb4-ww9v7_ad07fdc6-06e5-4045-8049-783bc6e6d5c6/barbican-keystone-listener-log/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.503250 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-648fb9b5bc-8f55h_ee82b20e-e4ad-4267-9845-3c5838fa1e0f/barbican-worker/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.587951 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-648fb9b5bc-8f55h_ee82b20e-e4ad-4267-9845-3c5838fa1e0f/barbican-worker-log/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.773721 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vzgbm_bac31a5a-12b7-4a43-b596-91352137545b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.951980 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-central-agent/1.log" Mar 16 18:46:53 crc kubenswrapper[4736]: I0316 18:46:53.978076 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:46:53 crc kubenswrapper[4736]: E0316 18:46:53.978412 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.004349 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-central-agent/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.120683 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/proxy-httpd/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.141131 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/ceilometer-notification-agent/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.243214 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_321d2397-bb79-4799-8725-95081269785f/sg-core/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.412051 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9/cinder-api-log/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.512555 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_40c8e7ab-5a8b-4ea8-bf0a-745f04e4f9e9/cinder-api/0.log" Mar 16 18:46:54 crc kubenswrapper[4736]: I0316 18:46:54.836288 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d62491e-6f65-49ab-8baf-3c653e7df95e/cinder-scheduler/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.045543 4736 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n68dw" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" 
probeResult="failure" output=< Mar 16 18:46:55 crc kubenswrapper[4736]: timeout: failed to connect service ":50051" within 1s Mar 16 18:46:55 crc kubenswrapper[4736]: > Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.083641 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_1d62491e-6f65-49ab-8baf-3c653e7df95e/probe/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.096749 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-shf2t_123f23c5-bce7-4080-a7da-1bce3b43d685/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.301081 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-thsth_09227c49-1a61-4c9c-827d-336efc0fe550/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.329238 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/init/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.604357 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/init/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.605992 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xxldx_ad968503-ce02-492c-a946-1d0e986a99ff/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.783031 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-577bbc9c65-pclks_8de44873-db27-4ee1-bade-ac87cec3c328/dnsmasq-dns/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.854672 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5c913371-a3e8-4e40-a1e3-69f93eeef930/glance-log/0.log" Mar 16 18:46:55 crc kubenswrapper[4736]: I0316 18:46:55.910385 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5c913371-a3e8-4e40-a1e3-69f93eeef930/glance-httpd/0.log" Mar 16 18:46:56 crc kubenswrapper[4736]: I0316 18:46:56.097013 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce1ada3f-941d-4468-8a04-0c780a84148b/glance-httpd/0.log" Mar 16 18:46:56 crc kubenswrapper[4736]: I0316 18:46:56.186209 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce1ada3f-941d-4468-8a04-0c780a84148b/glance-log/0.log" Mar 16 18:46:56 crc kubenswrapper[4736]: I0316 18:46:56.752365 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-678d85b7f7-bdd5r_5498e21f-9b52-4ecb-9c10-7a688723d57f/heat-engine/0.log" Mar 16 18:46:57 crc kubenswrapper[4736]: I0316 18:46:57.094180 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon/2.log" Mar 16 18:46:57 crc kubenswrapper[4736]: I0316 18:46:57.156298 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon/1.log" Mar 16 18:46:57 crc kubenswrapper[4736]: I0316 18:46:57.950406 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-tfpm9_7f216da9-755f-42e5-8058-15af7388a669/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:58 crc kubenswrapper[4736]: I0316 18:46:58.231924 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-f5xbq_3477848e-08a4-4e82-a565-d5e83bf58c7d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:58 crc kubenswrapper[4736]: I0316 18:46:58.648920 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-75b949cc99-d78kz_f78b78b4-bda7-42a0-b9e4-5083ef5e7cd7/heat-api/0.log" Mar 16 18:46:58 crc kubenswrapper[4736]: I0316 18:46:58.723981 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7b4b9fc8dc-4c2rv_b85b52b6-77da-47f3-96b7-5230c7804524/heat-cfnapi/0.log" Mar 16 18:46:58 crc kubenswrapper[4736]: I0316 18:46:58.906475 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561281-4v2pq_1be53923-c22e-42a2-936a-5dd4a6484821/keystone-cron/0.log" Mar 16 18:46:59 crc kubenswrapper[4736]: I0316 18:46:59.034591 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561341-2v97d_e106e4db-3f81-4cba-9c0a-697acced07dd/keystone-cron/0.log" Mar 16 18:46:59 crc kubenswrapper[4736]: I0316 18:46:59.295876 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29561401-q7h6f_0b25b5f7-f358-40e8-91df-9398c8719033/keystone-cron/0.log" Mar 16 18:46:59 crc kubenswrapper[4736]: I0316 18:46:59.608306 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_02f0ab2b-3871-4319-a39a-2c1d13a8c6e6/kube-state-metrics/0.log" Mar 16 18:46:59 crc kubenswrapper[4736]: I0316 18:46:59.758201 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gxm6h_97bb28be-aaed-4b82-9df1-cb24c9dd48e3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:46:59 crc kubenswrapper[4736]: I0316 18:46:59.842896 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-ff55bcd5b-psrsc_4a2c18b8-790c-4bb8-ac86-c70f0220ab3f/horizon-log/0.log" Mar 16 18:47:00 crc kubenswrapper[4736]: I0316 18:47:00.040439 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-654fb4cdb6-6lld5_ea7a8a52-515b-45e3-8c30-4fd52d65cdc6/keystone-api/0.log" Mar 16 18:47:00 crc kubenswrapper[4736]: I0316 18:47:00.336472 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75f7dd5797-28cnx_a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c/neutron-httpd/0.log" Mar 16 18:47:00 crc kubenswrapper[4736]: I0316 18:47:00.375180 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-qhd9x_7e4df23c-a76d-497d-b0c1-0b3264ed20ce/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:01 crc kubenswrapper[4736]: I0316 18:47:01.300404 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75f7dd5797-28cnx_a0df4b54-37bc-4bbe-bc2d-cacd7d1c921c/neutron-api/0.log" Mar 16 18:47:01 crc kubenswrapper[4736]: I0316 18:47:01.637926 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_24e0aaa8-4eb1-408e-b98a-f99ce1f8e909/nova-cell0-conductor-conductor/0.log" Mar 16 18:47:01 crc kubenswrapper[4736]: I0316 18:47:01.963670 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_202b09c4-bf70-46c9-aff5-b536e3f7ef9d/nova-cell1-conductor-conductor/0.log" Mar 16 18:47:02 crc kubenswrapper[4736]: I0316 18:47:02.448166 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b1ca423d-d8ce-437c-9fca-1b57025ab173/nova-cell1-novncproxy-novncproxy/0.log" Mar 16 18:47:02 crc kubenswrapper[4736]: I0316 18:47:02.470598 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7fa40817-425b-4ee8-9c3b-e7e109307837/memcached/0.log" Mar 16 18:47:02 crc kubenswrapper[4736]: I0316 18:47:02.476930 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bbqh7_4020b89b-a736-4914-9ea6-969e75a9b526/nova-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:02 crc kubenswrapper[4736]: I0316 18:47:02.943775 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b839907f-5ee5-450e-b483-ace8fd0fb0d5/nova-metadata-log/0.log" Mar 16 18:47:03 crc kubenswrapper[4736]: I0316 18:47:03.482497 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/mysql-bootstrap/0.log" Mar 16 18:47:03 crc kubenswrapper[4736]: I0316 18:47:03.724999 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/mysql-bootstrap/0.log" Mar 16 18:47:03 crc kubenswrapper[4736]: I0316 18:47:03.888487 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffe59cdd-6766-4d9b-a82c-0287d028a8d0/nova-api-log/0.log" Mar 16 18:47:03 crc kubenswrapper[4736]: I0316 18:47:03.957422 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9243d80f-05dc-4dff-a328-780f64a121af/galera/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.041514 4736 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.056701 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c38eb8c1-13d7-4ef2-b026-d55b36f56919/nova-scheduler-scheduler/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.095676 4736 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.234164 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/mysql-bootstrap/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.281937 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.508937 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/galera/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.568664 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_51e06fc2-19ee-4e32-8118-d4596cb6b124/mysql-bootstrap/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.749160 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_421bab10-ac4a-458f-98e3-18cd0adef038/openstackclient/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.793018 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b839907f-5ee5-450e-b483-ace8fd0fb0d5/nova-metadata-metadata/0.log" Mar 16 18:47:04 crc kubenswrapper[4736]: I0316 18:47:04.934060 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9wkhh_b3d93764-b264-4e7d-87fe-ea95bd3fb252/ovn-controller/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.108440 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m74rl_8ff193b6-fc55-427d-b256-a9b253fa60c4/openstack-network-exporter/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.180584 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server-init/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.261628 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n68dw" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" containerID="cri-o://a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5" gracePeriod=2 Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.473021 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ffe59cdd-6766-4d9b-a82c-0287d028a8d0/nova-api-api/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.487428 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.494852 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovsdb-server-init/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.518590 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jchb9_197c602f-0abb-430a-8011-a454072994fd/ovs-vswitchd/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.691451 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_38835fa0-dde3-4eb4-8ec0-7627436b49ca/ovn-northd/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.714537 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xt72q_cd71853b-a1d3-4429-90b3-4cee241cfa21/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.753140 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_38835fa0-dde3-4eb4-8ec0-7627436b49ca/openstack-network-exporter/0.log" Mar 16 18:47:05 crc kubenswrapper[4736]: I0316 18:47:05.948157 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3228db46-56d3-4e82-8973-77a049c7e003/ovsdbserver-nb/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.063716 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_adfa9156-d077-4b45-af4d-cc113fbff209/ovsdbserver-sb/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.078179 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_3228db46-56d3-4e82-8973-77a049c7e003/openstack-network-exporter/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.092611 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_adfa9156-d077-4b45-af4d-cc113fbff209/openstack-network-exporter/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.202420 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.242519 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities\") pod \"855bd155-e49f-4ffd-b109-dae4b051a56f\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.242613 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrbhv\" (UniqueName: \"kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv\") pod \"855bd155-e49f-4ffd-b109-dae4b051a56f\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.242710 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content\") pod \"855bd155-e49f-4ffd-b109-dae4b051a56f\" (UID: \"855bd155-e49f-4ffd-b109-dae4b051a56f\") " Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.251607 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities" (OuterVolumeSpecName: "utilities") pod "855bd155-e49f-4ffd-b109-dae4b051a56f" (UID: "855bd155-e49f-4ffd-b109-dae4b051a56f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.347549 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv" (OuterVolumeSpecName: "kube-api-access-mrbhv") pod "855bd155-e49f-4ffd-b109-dae4b051a56f" (UID: "855bd155-e49f-4ffd-b109-dae4b051a56f"). InnerVolumeSpecName "kube-api-access-mrbhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348174 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrbhv\" (UniqueName: \"kubernetes.io/projected/855bd155-e49f-4ffd-b109-dae4b051a56f-kube-api-access-mrbhv\") on node \"crc\" DevicePath \"\"" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348295 4736 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-utilities\") on node \"crc\" DevicePath \"\"" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348563 4736 generic.go:334] "Generic (PLEG): container finished" podID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerID="a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5" exitCode=0 Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348654 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerDied","Data":"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5"} Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348735 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n68dw" event={"ID":"855bd155-e49f-4ffd-b109-dae4b051a56f","Type":"ContainerDied","Data":"a640ac76ef525da6a732f35e581f2bdc9089ff3a2ddc86933bf00fa3c7670eed"} Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.348958 4736 scope.go:117] "RemoveContainer" containerID="a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.349041 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n68dw" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.395482 4736 scope.go:117] "RemoveContainer" containerID="2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.427235 4736 scope.go:117] "RemoveContainer" containerID="be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.455428 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "855bd155-e49f-4ffd-b109-dae4b051a56f" (UID: "855bd155-e49f-4ffd-b109-dae4b051a56f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.485321 4736 scope.go:117] "RemoveContainer" containerID="a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5" Mar 16 18:47:06 crc kubenswrapper[4736]: E0316 18:47:06.485715 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5\": container with ID starting with a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5 not found: ID does not exist" containerID="a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.485748 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5"} err="failed to get container status \"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5\": rpc error: code = NotFound desc = could not find container \"a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5\": container with ID starting with a07d73e122c2407815592c8a501d5e38d43ad64e7ad2493f2c505b9e6bb2d7a5 not found: ID does not exist" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.485779 4736 scope.go:117] "RemoveContainer" containerID="2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630" Mar 16 18:47:06 crc kubenswrapper[4736]: E0316 18:47:06.485981 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630\": container with ID starting with 2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630 not found: ID does not exist" containerID="2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.486010 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630"} err="failed to get container status \"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630\": rpc error: code = NotFound desc = could not find container \"2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630\": container with ID starting with 2e5f0864082ffb48d0eabf4ca971317774cf6601b1b02316c99544e8410e4630 not found: ID does not exist" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.486027 4736 scope.go:117] "RemoveContainer" containerID="be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d" Mar 16 18:47:06 crc kubenswrapper[4736]: E0316 18:47:06.486518 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d\": container with ID starting with be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d not found: ID does not exist" containerID="be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.486568 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d"} err="failed to get container status \"be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d\": rpc error: code = NotFound desc = could not 
find container \"be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d\": container with ID starting with be1bedc6bb51a1a0c433c8da0c7cc90fa22c03009b9e705ed100034109242a2d not found: ID does not exist" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.549437 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/setup-container/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.552026 4736 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/855bd155-e49f-4ffd-b109-dae4b051a56f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.624126 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/setup-container/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.700900 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.713069 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n68dw"] Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.737039 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784f554468-tgz6j_f744bb37-172a-4e29-b348-5b70d53c5d16/placement-api/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.814659 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da4145d1-110a-477c-ba28-813d6c53db11/rabbitmq/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.857187 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/setup-container/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.952221 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-784f554468-tgz6j_f744bb37-172a-4e29-b348-5b70d53c5d16/placement-log/0.log" Mar 16 18:47:06 crc kubenswrapper[4736]: I0316 18:47:06.988657 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" path="/var/lib/kubelet/pods/855bd155-e49f-4ffd-b109-dae4b051a56f/volumes" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.149282 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/rabbitmq/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.149733 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0b8db200-f455-4868-8ffd-7f129434034e/setup-container/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.247028 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jpnnb_431e1d6e-e9a2-4414-b37e-9612991eb00c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.313094 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lzcb8_a44a03ad-9259-452c-8234-2ee8f93d66be/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.360319 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xck9n_37a5c3b4-a904-4d80-8823-97fa52f36de3/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.478437 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-trqck_9ac6b0e7-17e2-4e8d-8e5c-5f188af3ed0a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.565534 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-clhnp_b21d0120-de7a-44aa-a9a7-469ff2670bd4/ssh-known-hosts-edpm-deployment/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.816192 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-678dd4f677-jxtsk_bccee937-d642-4483-87fb-033b157cf68c/proxy-server/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.949045 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-6f7cb_48b165ae-e228-45fa-a5d3-6d1f8c8f43b1/swift-ring-rebalance/0.log" Mar 16 18:47:07 crc kubenswrapper[4736]: I0316 18:47:07.978441 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:47:07 crc kubenswrapper[4736]: E0316 18:47:07.978671 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.034411 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-678dd4f677-jxtsk_bccee937-d642-4483-87fb-033b157cf68c/proxy-httpd/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.117533 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-reaper/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.154922 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-auditor/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.191704 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-server/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.273895 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/account-replicator/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.291733 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-auditor/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.401151 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-replicator/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.433646 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-server/0.log" Mar 16 
18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.436159 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/container-updater/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.532172 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-auditor/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.636973 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-expirer/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.690971 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-server/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.744633 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-updater/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.752119 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/object-replicator/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.761133 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/rsync/0.log" Mar 16 18:47:08 crc kubenswrapper[4736]: I0316 18:47:08.902019 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0892ebc9-dbd4-4652-9691-13028da07f80/swift-recon-cron/0.log" Mar 16 18:47:09 crc kubenswrapper[4736]: I0316 18:47:09.033349 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qrlvl_bf51b7ea-25d5-4fa2-9abe-db781c31f96f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:09 crc kubenswrapper[4736]: I0316 18:47:09.047941 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_50b6dff6-11bb-40fd-bc4b-6d0cec3f2e31/tempest-tests-tempest-tests-runner/0.log" Mar 16 18:47:09 crc kubenswrapper[4736]: I0316 18:47:09.180966 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_252155f6-a310-43e1-bf80-1d17a2db2128/tempest-tests-tempest-tests-runner/0.log" Mar 16 18:47:09 crc kubenswrapper[4736]: I0316 18:47:09.235903 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_75ad0a89-ba23-4203-b8a5-e6b3a4ee7c19/test-operator-logs-container/0.log" Mar 16 18:47:09 crc kubenswrapper[4736]: I0316 18:47:09.397150 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6q2tj_6f274275-8257-4335-b3d8-a2441d5ddf1e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 16 18:47:18 crc kubenswrapper[4736]: I0316 18:47:18.979098 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:47:18 crc kubenswrapper[4736]: E0316 18:47:18.980949 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:47:29 crc kubenswrapper[4736]: I0316 18:47:29.978098 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:47:29 crc kubenswrapper[4736]: E0316 18:47:29.979850 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.027460 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-w4ppt_99a35a5a-103f-4e00-9b39-d4f86531f5f7/manager/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.293939 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.536150 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.576672 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.590399 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.807014 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/extract/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.846136 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/pull/0.log" Mar 16 18:47:39 crc kubenswrapper[4736]: I0316 18:47:39.892998 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbd000c4e420e86222bb51dea4dfaa52c1f7b7d4d9f24d538cb6738e4a9gzkt_6e002ccd-bc1b-4542-9e87-4086de4291c9/util/0.log" Mar 16 18:47:40 crc kubenswrapper[4736]: I0316 18:47:40.355149 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-c6tc2_2d48b057-960e-445a-bc66-b6d3dbfb56f9/manager/0.log" Mar 16 18:47:40 crc kubenswrapper[4736]: I0316 18:47:40.644532 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-z9l9q_9d7909e1-3088-4a9e-b2ac-286927abd741/manager/0.log" Mar 16 18:47:40 crc kubenswrapper[4736]: I0316 18:47:40.906594 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-d6m9n_1ae22b3c-97a5-4592-b263-557131818155/manager/0.log" Mar 16 18:47:40 crc kubenswrapper[4736]: I0316 18:47:40.925328 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-fd7xj_aac26090-af84-496a-afdf-efdb24694811/manager/0.log" Mar 16 18:47:41 crc kubenswrapper[4736]: I0316 18:47:41.566459 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-9kzj2_f8308a1a-301e-40b9-8a0e-b7e267e74a10/manager/0.log" Mar 16 18:47:41 crc kubenswrapper[4736]: I0316 18:47:41.572063 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-9b78c_634ac783-1fe6-4191-b432-f22ad5d84357/manager/0.log" Mar 16 18:47:41 crc kubenswrapper[4736]: I0316 18:47:41.875060 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-7gd6n_d77bc7ac-fb08-4603-8453-677c6be6916d/manager/0.log" Mar 16 18:47:41 crc kubenswrapper[4736]: I0316 18:47:41.898693 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-sjpl5_569449b8-1135-4dd6-b6fe-ad66844b413e/manager/0.log" Mar 16 18:47:41 crc kubenswrapper[4736]: I0316 18:47:41.981183 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:47:41 crc kubenswrapper[4736]: E0316 18:47:41.983684 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:47:42 crc kubenswrapper[4736]: I0316 18:47:42.196938 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-bgvjq_285f243f-b886-440f-8a92-b1ddf60bf6e6/manager/0.log" Mar 16 18:47:42 crc kubenswrapper[4736]: I0316 18:47:42.608996 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-bqghw_99d86cbe-cf17-42a7-bc5b-d692609fff64/manager/0.log" Mar 16 18:47:42 crc kubenswrapper[4736]: I0316 18:47:42.697430 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-7hrfc_b1ae843c-f1b5-4ee2-8300-55f93941ba2b/manager/0.log" Mar 16 18:47:42 crc kubenswrapper[4736]: I0316 18:47:42.890236 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-tgqsm_8163ef92-862a-4de1-a443-8ac84a5ba0c9/manager/0.log" Mar 16 18:47:42 crc kubenswrapper[4736]: I0316 18:47:42.931864 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-47kkg_e7971b38-1b13-4984-a055-2cc52b34bf6b/manager/0.log" Mar 16 18:47:43 crc kubenswrapper[4736]: I0316 18:47:43.124565 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-89d64c458-pc5vv_62d536a1-c184-4077-a6f8-4285c3ebe5db/manager/0.log" Mar 16 18:47:43 crc kubenswrapper[4736]: I0316 18:47:43.323223 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5dbd94f64-hsp7x_34b67803-050a-457b-80ff-64455949a26d/operator/0.log" Mar 16 18:47:43 crc kubenswrapper[4736]: I0316 18:47:43.578098 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ztkrd_aeb1e197-872b-4ade-b3e4-425a5e52433f/registry-server/0.log" Mar 16 18:47:43 crc kubenswrapper[4736]: I0316 18:47:43.882725 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-vfqg8_40be2c61-bd71-46b6-b837-abf09d8d5aeb/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.120983 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-6fjhm_6cbcdd30-245d-4732-8986-77f861f1f568/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.271493 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dblvg_0a609c84-6f6b-48ae-a12b-d604e7b91c36/operator/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.396290 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5467877-vhgh7_0a9b1e66-192c-4eab-a960-7fbd08759f54/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.467211 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-7lk9g_534a3ae8-6587-4e8a-b454-b084edbfeb21/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.555577 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-6z9rj_fff6882e-3a77-462f-b12e-25192ea56328/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.718109 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-5ngf9_93d0e3bc-0e33-4254-b52e-31f28fdff357/manager/0.log" Mar 16 18:47:44 crc kubenswrapper[4736]: I0316 18:47:44.788729 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-pj69z_bdcce941-5cae-42fe-9dc5-a71e1e55790e/manager/0.log" Mar 16 18:47:56 crc kubenswrapper[4736]: I0316 18:47:56.978368 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:47:56 crc kubenswrapper[4736]: E0316 18:47:56.979155 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.318585 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561448-lhtbv"] Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 
18:48:00.323011 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="extract-content" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323048 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="extract-content" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323094 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323140 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323166 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="extract-utilities" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323177 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="extract-utilities" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323216 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323225 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323237 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4151b176-dee3-4c23-891a-85aabb1491a9" containerName="container-00" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323245 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="4151b176-dee3-4c23-891a-85aabb1491a9" containerName="container-00" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323273 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="extract-content" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323282 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="extract-content" Mar 16 18:48:00 crc kubenswrapper[4736]: E0316 18:48:00.323308 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="extract-utilities" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.323318 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="extract-utilities" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.324818 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="4151b176-dee3-4c23-891a-85aabb1491a9" containerName="container-00" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.325162 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="802d5c93-6fcd-4d68-abcb-6d8a863e91bc" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.325188 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="855bd155-e49f-4ffd-b109-dae4b051a56f" containerName="registry-server" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.330861 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.345000 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.345134 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.345010 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.350690 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561448-lhtbv"] Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.459857 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdm7p\" (UniqueName: \"kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p\") pod \"auto-csr-approver-29561448-lhtbv\" (UID: \"370bf431-b0ae-4e95-b520-a99c927bad31\") " pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.562395 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdm7p\" (UniqueName: \"kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p\") pod \"auto-csr-approver-29561448-lhtbv\" (UID: \"370bf431-b0ae-4e95-b520-a99c927bad31\") " pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.600331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdm7p\" (UniqueName: \"kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p\") pod \"auto-csr-approver-29561448-lhtbv\" (UID: \"370bf431-b0ae-4e95-b520-a99c927bad31\") " pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:00 crc kubenswrapper[4736]: I0316 18:48:00.656902 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:01 crc kubenswrapper[4736]: I0316 18:48:01.248762 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561448-lhtbv"] Mar 16 18:48:01 crc kubenswrapper[4736]: W0316 18:48:01.272973 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod370bf431_b0ae_4e95_b520_a99c927bad31.slice/crio-2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342 WatchSource:0}: Error finding container 2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342: Status 404 returned error can't find the container with id 2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342 Mar 16 18:48:01 crc kubenswrapper[4736]: I0316 18:48:01.878482 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" event={"ID":"370bf431-b0ae-4e95-b520-a99c927bad31","Type":"ContainerStarted","Data":"2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342"} Mar 16 18:48:03 crc kubenswrapper[4736]: I0316 18:48:03.907938 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" event={"ID":"370bf431-b0ae-4e95-b520-a99c927bad31","Type":"ContainerStarted","Data":"eeb9d26c73b19c81133dc36edf18e4b063040d3b3e4bfaafd7f6937cf96436a7"} Mar 16 18:48:03 crc kubenswrapper[4736]: I0316 18:48:03.924127 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" podStartSLOduration=2.830462051 podStartE2EDuration="3.924099414s" podCreationTimestamp="2026-03-16 18:48:00 +0000 UTC" firstStartedPulling="2026-03-16 18:48:01.276272064 +0000 UTC m=+12883.003662341" lastFinishedPulling="2026-03-16 18:48:02.369909397 +0000 UTC m=+12884.097299704" observedRunningTime="2026-03-16 18:48:03.921133083 +0000 UTC m=+12885.648523370" watchObservedRunningTime="2026-03-16 18:48:03.924099414 +0000 UTC m=+12885.651489691" Mar 16 18:48:04 crc kubenswrapper[4736]: I0316 18:48:04.920716 4736 generic.go:334] "Generic (PLEG): container finished" podID="370bf431-b0ae-4e95-b520-a99c927bad31" containerID="eeb9d26c73b19c81133dc36edf18e4b063040d3b3e4bfaafd7f6937cf96436a7" exitCode=0 Mar 16 18:48:04 crc kubenswrapper[4736]: I0316 18:48:04.921057 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" event={"ID":"370bf431-b0ae-4e95-b520-a99c927bad31","Type":"ContainerDied","Data":"eeb9d26c73b19c81133dc36edf18e4b063040d3b3e4bfaafd7f6937cf96436a7"} Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.359453 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.483863 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdm7p\" (UniqueName: \"kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p\") pod \"370bf431-b0ae-4e95-b520-a99c927bad31\" (UID: \"370bf431-b0ae-4e95-b520-a99c927bad31\") " Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.490307 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p" (OuterVolumeSpecName: "kube-api-access-tdm7p") pod "370bf431-b0ae-4e95-b520-a99c927bad31" (UID: "370bf431-b0ae-4e95-b520-a99c927bad31"). InnerVolumeSpecName "kube-api-access-tdm7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.586501 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdm7p\" (UniqueName: \"kubernetes.io/projected/370bf431-b0ae-4e95-b520-a99c927bad31-kube-api-access-tdm7p\") on node \"crc\" DevicePath \"\"" Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.940290 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" event={"ID":"370bf431-b0ae-4e95-b520-a99c927bad31","Type":"ContainerDied","Data":"2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342"} Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.940327 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef6d2f11fec998d6f451b8548157c883023179ea9dbd20dda03a8883637f342" Mar 16 18:48:06 crc kubenswrapper[4736]: I0316 18:48:06.940339 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561448-lhtbv" Mar 16 18:48:07 crc kubenswrapper[4736]: I0316 18:48:07.011375 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561442-hkp65"] Mar 16 18:48:07 crc kubenswrapper[4736]: I0316 18:48:07.020159 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561442-hkp65"] Mar 16 18:48:07 crc kubenswrapper[4736]: I0316 18:48:07.456253 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zxgkb_dca4fa92-819d-4973-87b1-b6282946f072/control-plane-machine-set-operator/0.log" Mar 16 18:48:07 crc kubenswrapper[4736]: I0316 18:48:07.580730 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dswrb_aee91c3b-8c99-4023-a891-2aaa3ab5ebcc/kube-rbac-proxy/0.log" Mar 16 18:48:07 crc kubenswrapper[4736]: I0316 18:48:07.636880 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dswrb_aee91c3b-8c99-4023-a891-2aaa3ab5ebcc/machine-api-operator/0.log" Mar 16 18:48:09 crc kubenswrapper[4736]: I0316 18:48:09.006147 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecd5486-0fb5-4256-82e5-e92e5d922030" path="/var/lib/kubelet/pods/8ecd5486-0fb5-4256-82e5-e92e5d922030/volumes" Mar 16 18:48:10 crc kubenswrapper[4736]: I0316 18:48:10.978944 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:48:10 crc kubenswrapper[4736]: E0316 18:48:10.979643 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:48:12 crc kubenswrapper[4736]: I0316 18:48:12.027430 4736 scope.go:117] "RemoveContainer" containerID="9cedd808de3ca36921262cac7fb2a5066faebf1877a6e40c0c5fd56920be7a70" Mar 16 18:48:22 crc kubenswrapper[4736]: I0316 18:48:22.731455 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-775xw_855eb880-6d37-4d3c-a863-d4cb7520dc47/cert-manager-controller/0.log" Mar 16 18:48:22 crc kubenswrapper[4736]: I0316 18:48:22.946394 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nvl2s_8c13d851-26c9-4a4f-8ffc-a94a10784cf2/cert-manager-cainjector/0.log" Mar 16 18:48:22 crc kubenswrapper[4736]: I0316 18:48:22.979114 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:48:22 crc kubenswrapper[4736]: E0316 18:48:22.979323 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:48:22 crc kubenswrapper[4736]: I0316 18:48:22.988753 4736 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qrt7l_d2eb8b3d-8b48-4110-bab7-66fc20948ee5/cert-manager-webhook/0.log" Mar 16 18:48:33 crc kubenswrapper[4736]: I0316 18:48:33.978968 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:48:33 crc kubenswrapper[4736]: E0316 18:48:33.980017 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:48:37 crc kubenswrapper[4736]: I0316 18:48:37.349542 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-tsmxn_3822e5b4-5129-4b3b-8bf3-7262a5ad4cde/nmstate-console-plugin/0.log" Mar 16 18:48:37 crc kubenswrapper[4736]: I0316 18:48:37.613947 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nhkfx_b184bdf0-82cb-428b-96ce-f4ebbada7645/kube-rbac-proxy/0.log" Mar 16 18:48:37 crc kubenswrapper[4736]: I0316 18:48:37.637431 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tbvd2_eea9e7aa-6f24-4b45-b7b4-347a38dccb64/nmstate-handler/0.log" Mar 16 18:48:37 crc kubenswrapper[4736]: I0316 18:48:37.740247 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-nhkfx_b184bdf0-82cb-428b-96ce-f4ebbada7645/nmstate-metrics/0.log" Mar 16 18:48:37 crc kubenswrapper[4736]: I0316 18:48:37.869473 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-5cjp6_46173c99-f17a-4099-a210-397cf7b8cd18/nmstate-operator/0.log" Mar 16 18:48:38 crc kubenswrapper[4736]: I0316 18:48:38.002231 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-xbp5l_88c72b3e-a013-4f44-ae5f-93e44846f22a/nmstate-webhook/0.log" Mar 16 18:48:46 crc kubenswrapper[4736]: I0316 18:48:46.981895 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:48:46 crc kubenswrapper[4736]: E0316 18:48:46.983007 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:48:57 crc kubenswrapper[4736]: I0316 18:48:57.978236 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:48:57 crc kubenswrapper[4736]: E0316 18:48:57.979017 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.212545 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fxq9c_29db2924-7903-45c8-9f87-a4e3e070a4a3/kube-rbac-proxy/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.319978 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-fxq9c_29db2924-7903-45c8-9f87-a4e3e070a4a3/controller/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.470845 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.615349 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.662001 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.698577 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.743828 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.861697 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.864822 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.912743 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:49:07 crc kubenswrapper[4736]: I0316 18:49:07.964827 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.175346 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-frr-files/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.178589 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-metrics/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.216947 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/controller/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.225065 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/cp-reloader/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.474923 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr-metrics/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.533536 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/kube-rbac-proxy/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.767979 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/reloader/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.789715 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/kube-rbac-proxy-frr/0.log" Mar 16 18:49:08 crc kubenswrapper[4736]: I0316 18:49:08.987546 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:49:08 crc kubenswrapper[4736]: E0316 18:49:08.988175 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:49:09 crc kubenswrapper[4736]: I0316 18:49:09.047400 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-vxgc7_21bc5f54-2767-431f-add2-433724ea4408/frr-k8s-webhook-server/0.log" Mar 16 18:49:09 crc kubenswrapper[4736]: I0316 18:49:09.352548 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b95679b96-mfd85_109b033e-a4ea-474a-9e79-e895cc75666e/manager/0.log" Mar 16 18:49:09 crc kubenswrapper[4736]: I0316 18:49:09.752258 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-9c55cfcd7-trkfb_b7e58b81-1f06-4844-adbe-ade114adc726/webhook-server/0.log" Mar 16 18:49:10 crc kubenswrapper[4736]: I0316 18:49:10.052694 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djs8w_5ab46a17-c761-4952-b743-9ede5877674a/kube-rbac-proxy/0.log" Mar 16 18:49:10 crc kubenswrapper[4736]: I0316 18:49:10.838677 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djs8w_5ab46a17-c761-4952-b743-9ede5877674a/speaker/0.log" Mar 16 18:49:10 crc kubenswrapper[4736]: I0316 18:49:10.901442 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr/0.log" Mar 16 18:49:10 crc kubenswrapper[4736]: I0316 18:49:10.909749 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hsz7s_b8fd5e0d-983e-4780-9a84-dc84a9766804/frr/1.log" Mar 16 18:49:23 crc kubenswrapper[4736]: I0316 18:49:23.978744 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:49:23 crc kubenswrapper[4736]: E0316 18:49:23.979557 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:49:24 crc kubenswrapper[4736]: I0316 18:49:24.833300 4736 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.194498 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.225352 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.307249 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.496350 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/util/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.533649 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/extract/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.580747 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8744gzfd_f2926967-d729-4a26-8c6a-350f3a0419e1/pull/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.713398 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.883436 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.898682 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:49:25 crc kubenswrapper[4736]: I0316 18:49:25.928789 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.135239 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/extract/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.154369 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/pull/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.168659 4736 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1grbxk_c5b483ba-1563-4424-bd65-5e489514f5e5/util/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.364497 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.560957 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.599790 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.610883 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.848342 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-utilities/0.log" Mar 16 18:49:26 crc kubenswrapper[4736]: I0316 18:49:26.894815 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/extract-content/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.146985 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-utilities/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.413479 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-content/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.416191 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-utilities/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.570311 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-content/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.788757 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-content/0.log" Mar 16 18:49:27 crc kubenswrapper[4736]: I0316 18:49:27.867615 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/extract-utilities/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.122285 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m4pdk_0291b8d7-ac75-48c3-9080-62e1bc49bb9f/registry-server/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.173361 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4qcc8_7c040e8d-b247-49a6-93bd-f928c704b135/marketplace-operator/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.530173 4736 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.555024 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-j9zbt_64d704b6-37a3-4ea1-bbe6-af675569eb7a/registry-server/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.757787 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.798998 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:49:28 crc kubenswrapper[4736]: I0316 18:49:28.808892 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.028022 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-utilities/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.093167 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/extract-content/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.359355 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.421619 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5kgqs_9fba4db8-97e3-4e96-b22b-0aca71b4217f/registry-server/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.661812 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.710269 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.711838 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.958503 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-content/0.log" Mar 16 18:49:29 crc kubenswrapper[4736]: I0316 18:49:29.960608 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/extract-utilities/0.log" Mar 16 18:49:30 crc kubenswrapper[4736]: I0316 18:49:30.533043 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kffc4_8fdffd9a-9fd4-4ec8-a660-cbd4c759b375/registry-server/0.log" Mar 16 18:49:38 crc kubenswrapper[4736]: I0316 18:49:38.994785 4736 scope.go:117] "RemoveContainer" 
containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:49:39 crc kubenswrapper[4736]: I0316 18:49:39.779769 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58"} Mar 16 18:49:53 crc kubenswrapper[4736]: E0316 18:49:53.492127 4736 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.30:43780->38.102.83.30:38289: read tcp 38.102.83.30:43780->38.102.83.30:38289: read: connection reset by peer Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.183965 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561450-dc8sv"] Mar 16 18:50:00 crc kubenswrapper[4736]: E0316 18:50:00.190779 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370bf431-b0ae-4e95-b520-a99c927bad31" containerName="oc" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.190816 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="370bf431-b0ae-4e95-b520-a99c927bad31" containerName="oc" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.193783 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="370bf431-b0ae-4e95-b520-a99c927bad31" containerName="oc" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.198386 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561450-dc8sv"] Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.198506 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.207868 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.207872 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.207876 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.351358 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrpx\" (UniqueName: \"kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx\") pod \"auto-csr-approver-29561450-dc8sv\" (UID: \"e5f91286-06b2-4d02-a397-ff977f824452\") " pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.453749 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrpx\" (UniqueName: \"kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx\") pod \"auto-csr-approver-29561450-dc8sv\" (UID: \"e5f91286-06b2-4d02-a397-ff977f824452\") " pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.496395 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrpx\" (UniqueName: \"kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx\") pod \"auto-csr-approver-29561450-dc8sv\" (UID: \"e5f91286-06b2-4d02-a397-ff977f824452\") " 
pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:00 crc kubenswrapper[4736]: I0316 18:50:00.521187 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:02 crc kubenswrapper[4736]: I0316 18:50:02.096915 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561450-dc8sv"] Mar 16 18:50:02 crc kubenswrapper[4736]: I0316 18:50:02.126586 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:50:03 crc kubenswrapper[4736]: I0316 18:50:03.009689 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" event={"ID":"e5f91286-06b2-4d02-a397-ff977f824452","Type":"ContainerStarted","Data":"df9f1455b1d9383f5c71cbba33177990b6bb83484e454185896fd46831ea03b0"} Mar 16 18:50:05 crc kubenswrapper[4736]: I0316 18:50:05.028500 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" event={"ID":"e5f91286-06b2-4d02-a397-ff977f824452","Type":"ContainerStarted","Data":"3faf501f17b24fc6ea7b7f36fbdba81acf6a742dd4e810f3e9aeb38550cb32b8"} Mar 16 18:50:05 crc kubenswrapper[4736]: I0316 18:50:05.064845 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" podStartSLOduration=3.761753692 podStartE2EDuration="5.064163816s" podCreationTimestamp="2026-03-16 18:50:00 +0000 UTC" firstStartedPulling="2026-03-16 18:50:02.116826202 +0000 UTC m=+13003.844216489" lastFinishedPulling="2026-03-16 18:50:03.419236326 +0000 UTC m=+13005.146626613" observedRunningTime="2026-03-16 18:50:05.05254043 +0000 UTC m=+13006.779930737" watchObservedRunningTime="2026-03-16 18:50:05.064163816 +0000 UTC m=+13006.791554103" Mar 16 18:50:07 crc kubenswrapper[4736]: I0316 18:50:07.044365 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" event={"ID":"e5f91286-06b2-4d02-a397-ff977f824452","Type":"ContainerDied","Data":"3faf501f17b24fc6ea7b7f36fbdba81acf6a742dd4e810f3e9aeb38550cb32b8"} Mar 16 18:50:07 crc kubenswrapper[4736]: I0316 18:50:07.046315 4736 generic.go:334] "Generic (PLEG): container finished" podID="e5f91286-06b2-4d02-a397-ff977f824452" containerID="3faf501f17b24fc6ea7b7f36fbdba81acf6a742dd4e810f3e9aeb38550cb32b8" exitCode=0 Mar 16 18:50:08 crc kubenswrapper[4736]: I0316 18:50:08.562084 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:08 crc kubenswrapper[4736]: I0316 18:50:08.641262 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrpx\" (UniqueName: \"kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx\") pod \"e5f91286-06b2-4d02-a397-ff977f824452\" (UID: \"e5f91286-06b2-4d02-a397-ff977f824452\") " Mar 16 18:50:08 crc kubenswrapper[4736]: I0316 18:50:08.663459 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx" (OuterVolumeSpecName: "kube-api-access-xvrpx") pod "e5f91286-06b2-4d02-a397-ff977f824452" (UID: "e5f91286-06b2-4d02-a397-ff977f824452"). InnerVolumeSpecName "kube-api-access-xvrpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:50:08 crc kubenswrapper[4736]: I0316 18:50:08.746755 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvrpx\" (UniqueName: \"kubernetes.io/projected/e5f91286-06b2-4d02-a397-ff977f824452-kube-api-access-xvrpx\") on node \"crc\" DevicePath \"\"" Mar 16 18:50:09 crc kubenswrapper[4736]: I0316 18:50:09.061918 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" event={"ID":"e5f91286-06b2-4d02-a397-ff977f824452","Type":"ContainerDied","Data":"df9f1455b1d9383f5c71cbba33177990b6bb83484e454185896fd46831ea03b0"} Mar 16 18:50:09 crc kubenswrapper[4736]: I0316 18:50:09.062247 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561450-dc8sv" Mar 16 18:50:09 crc kubenswrapper[4736]: I0316 18:50:09.062747 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9f1455b1d9383f5c71cbba33177990b6bb83484e454185896fd46831ea03b0" Mar 16 18:50:09 crc kubenswrapper[4736]: I0316 18:50:09.167328 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561444-ngx98"] Mar 16 18:50:09 crc kubenswrapper[4736]: I0316 18:50:09.174849 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561444-ngx98"] Mar 16 18:50:10 crc kubenswrapper[4736]: I0316 18:50:10.990493 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1458e0bb-3d63-47d8-aa17-41302d7b078c" path="/var/lib/kubelet/pods/1458e0bb-3d63-47d8-aa17-41302d7b078c/volumes" Mar 16 18:50:12 crc kubenswrapper[4736]: I0316 18:50:12.129865 4736 scope.go:117] "RemoveContainer" containerID="4a2b01fbe0db029e6da6bf657d3044bea9fd2705aa9a6e05ad707a297d03f65d" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.174409 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561452-2k4z8"] Mar 16 18:52:00 crc kubenswrapper[4736]: E0316 18:52:00.175406 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f91286-06b2-4d02-a397-ff977f824452" containerName="oc" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.175423 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f91286-06b2-4d02-a397-ff977f824452" containerName="oc" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.176345 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f91286-06b2-4d02-a397-ff977f824452" containerName="oc" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.177159 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.179528 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.179822 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.186219 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.212892 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561452-2k4z8"] Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.247148 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbd9q\" (UniqueName: \"kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q\") pod \"auto-csr-approver-29561452-2k4z8\" (UID: \"92919fbd-33ef-48b8-8c77-0a950edc6a88\") " pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.349792 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbd9q\" (UniqueName: \"kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q\") pod \"auto-csr-approver-29561452-2k4z8\" (UID: \"92919fbd-33ef-48b8-8c77-0a950edc6a88\") " pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.379726 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbd9q\" (UniqueName: \"kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q\") pod \"auto-csr-approver-29561452-2k4z8\" (UID: \"92919fbd-33ef-48b8-8c77-0a950edc6a88\") " pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:00 crc kubenswrapper[4736]: I0316 18:52:00.493286 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:01 crc kubenswrapper[4736]: I0316 18:52:01.111034 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561452-2k4z8"] Mar 16 18:52:01 crc kubenswrapper[4736]: I0316 18:52:01.317504 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" event={"ID":"92919fbd-33ef-48b8-8c77-0a950edc6a88","Type":"ContainerStarted","Data":"7bfd35ca4aecaa1eef4a8cf41adfd1a944571260fd621c92131369630191e4fe"} Mar 16 18:52:03 crc kubenswrapper[4736]: I0316 18:52:03.341909 4736 generic.go:334] "Generic (PLEG): container finished" podID="92919fbd-33ef-48b8-8c77-0a950edc6a88" containerID="381ecf4906b84d7bf5c4bc185585737cedee01496e81d435df2ee1bdc11b2344" exitCode=0 Mar 16 18:52:03 crc kubenswrapper[4736]: I0316 18:52:03.341988 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" event={"ID":"92919fbd-33ef-48b8-8c77-0a950edc6a88","Type":"ContainerDied","Data":"381ecf4906b84d7bf5c4bc185585737cedee01496e81d435df2ee1bdc11b2344"} Mar 16 18:52:04 crc kubenswrapper[4736]: I0316 18:52:04.751912 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:04 crc kubenswrapper[4736]: I0316 18:52:04.848812 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbd9q\" (UniqueName: \"kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q\") pod \"92919fbd-33ef-48b8-8c77-0a950edc6a88\" (UID: \"92919fbd-33ef-48b8-8c77-0a950edc6a88\") " Mar 16 18:52:04 crc kubenswrapper[4736]: I0316 18:52:04.861988 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q" (OuterVolumeSpecName: "kube-api-access-rbd9q") pod "92919fbd-33ef-48b8-8c77-0a950edc6a88" (UID: "92919fbd-33ef-48b8-8c77-0a950edc6a88"). InnerVolumeSpecName "kube-api-access-rbd9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:52:04 crc kubenswrapper[4736]: I0316 18:52:04.950674 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbd9q\" (UniqueName: \"kubernetes.io/projected/92919fbd-33ef-48b8-8c77-0a950edc6a88-kube-api-access-rbd9q\") on node \"crc\" DevicePath \"\"" Mar 16 18:52:05 crc kubenswrapper[4736]: I0316 18:52:05.371237 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" event={"ID":"92919fbd-33ef-48b8-8c77-0a950edc6a88","Type":"ContainerDied","Data":"7bfd35ca4aecaa1eef4a8cf41adfd1a944571260fd621c92131369630191e4fe"} Mar 16 18:52:05 crc kubenswrapper[4736]: I0316 18:52:05.371649 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfd35ca4aecaa1eef4a8cf41adfd1a944571260fd621c92131369630191e4fe" Mar 16 18:52:05 crc kubenswrapper[4736]: I0316 18:52:05.371341 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561452-2k4z8" Mar 16 18:52:05 crc kubenswrapper[4736]: I0316 18:52:05.876195 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561446-9cg4l"] Mar 16 18:52:05 crc kubenswrapper[4736]: I0316 18:52:05.886470 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561446-9cg4l"] Mar 16 18:52:07 crc kubenswrapper[4736]: I0316 18:52:07.002136 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f110314-5c8b-4ce2-aab5-5a8e3944cab5" path="/var/lib/kubelet/pods/8f110314-5c8b-4ce2-aab5-5a8e3944cab5/volumes" Mar 16 18:52:08 crc kubenswrapper[4736]: I0316 18:52:08.507659 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:52:08 crc kubenswrapper[4736]: I0316 18:52:08.509004 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:52:12 crc kubenswrapper[4736]: I0316 18:52:12.316285 4736 scope.go:117] "RemoveContainer" containerID="9bca14046d6eb9f7c855857cc46a427460d4026b5de72346e4d1aaed266ec42e" Mar 16 18:52:18 crc kubenswrapper[4736]: I0316 18:52:18.540890 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerID="da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2" exitCode=0 Mar 16 18:52:18 crc kubenswrapper[4736]: I0316 18:52:18.540994 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-285pn/must-gather-kmbcd" event={"ID":"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a","Type":"ContainerDied","Data":"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2"} Mar 16 18:52:18 crc kubenswrapper[4736]: I0316 18:52:18.542384 4736 scope.go:117] "RemoveContainer" containerID="da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2" Mar 16 18:52:18 crc kubenswrapper[4736]: I0316 18:52:18.760639 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-285pn_must-gather-kmbcd_6b5e0ca0-0401-4c9f-a734-40c98a75bd3a/gather/0.log" Mar 16 18:52:35 crc kubenswrapper[4736]: I0316 18:52:35.650192 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-285pn/must-gather-kmbcd"] Mar 16 18:52:35 crc kubenswrapper[4736]: I0316 18:52:35.651053 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-285pn/must-gather-kmbcd" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="copy" containerID="cri-o://edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b" gracePeriod=2 Mar 16 18:52:35 crc kubenswrapper[4736]: I0316 18:52:35.662592 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-285pn/must-gather-kmbcd"] Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.158626 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-285pn_must-gather-kmbcd_6b5e0ca0-0401-4c9f-a734-40c98a75bd3a/copy/0.log" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 
18:52:36.159454 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.239989 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8qw\" (UniqueName: \"kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw\") pod \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.240172 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output\") pod \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\" (UID: \"6b5e0ca0-0401-4c9f-a734-40c98a75bd3a\") " Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.246586 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw" (OuterVolumeSpecName: "kube-api-access-vm8qw") pod "6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" (UID: "6b5e0ca0-0401-4c9f-a734-40c98a75bd3a"). InnerVolumeSpecName "kube-api-access-vm8qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.344919 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8qw\" (UniqueName: \"kubernetes.io/projected/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-kube-api-access-vm8qw\") on node \"crc\" DevicePath \"\"" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.418124 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" (UID: "6b5e0ca0-0401-4c9f-a734-40c98a75bd3a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.446826 4736 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.720182 4736 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-285pn_must-gather-kmbcd_6b5e0ca0-0401-4c9f-a734-40c98a75bd3a/copy/0.log" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.720885 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerID="edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b" exitCode=143 Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.720939 4736 scope.go:117] "RemoveContainer" containerID="edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.720981 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-285pn/must-gather-kmbcd" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.741278 4736 scope.go:117] "RemoveContainer" containerID="da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.791480 4736 scope.go:117] "RemoveContainer" containerID="edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b" Mar 16 18:52:36 crc kubenswrapper[4736]: E0316 18:52:36.795446 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b\": container with ID starting with edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b not found: ID does not exist" containerID="edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.795483 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b"} err="failed to get container status \"edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b\": rpc error: code = NotFound desc = could not find container \"edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b\": container with ID starting with edd2d2767bea75009ed00e1bc2c23b980d2fff61d39fd49361ea0df16cfe5b9b not found: ID does not exist" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.795506 4736 scope.go:117] "RemoveContainer" containerID="da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2" Mar 16 18:52:36 crc kubenswrapper[4736]: E0316 18:52:36.795841 4736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2\": container with ID starting with da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2 not found: ID does not exist" containerID="da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.795861 4736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2"} err="failed to get container status \"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2\": rpc error: code = NotFound desc = could not find container \"da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2\": container with ID starting with da27d6c0a7649d4c86fa1faea41ecfcc669363b7b5fe3a146549db85669160f2 not found: ID does not exist" Mar 16 18:52:36 crc kubenswrapper[4736]: I0316 18:52:36.989079 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" path="/var/lib/kubelet/pods/6b5e0ca0-0401-4c9f-a734-40c98a75bd3a/volumes" Mar 16 18:52:38 crc kubenswrapper[4736]: I0316 18:52:38.507891 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:52:38 crc kubenswrapper[4736]: I0316 18:52:38.508351 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" 
podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:53:08 crc kubenswrapper[4736]: I0316 18:53:08.507446 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:53:08 crc kubenswrapper[4736]: I0316 18:53:08.508879 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:53:08 crc kubenswrapper[4736]: I0316 18:53:08.508918 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:53:08 crc kubenswrapper[4736]: I0316 18:53:08.509621 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:53:08 crc kubenswrapper[4736]: I0316 18:53:08.509663 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58" gracePeriod=600 Mar 16 18:53:09 crc kubenswrapper[4736]: I0316 18:53:09.047300 4736 generic.go:334] "Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58" exitCode=0 Mar 16 18:53:09 crc kubenswrapper[4736]: I0316 18:53:09.047434 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58"} Mar 16 18:53:09 crc kubenswrapper[4736]: I0316 18:53:09.047836 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerStarted","Data":"3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e"} Mar 16 18:53:09 crc kubenswrapper[4736]: I0316 18:53:09.047876 4736 scope.go:117] "RemoveContainer" containerID="9f1ec3ee7882eb58935a1a4ddf48b76c8c2f6977a291119aeeb632be815b04b4" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.149902 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561454-lcc4w"] Mar 16 18:54:00 crc kubenswrapper[4736]: E0316 18:54:00.150801 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="gather" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.150813 4736 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="gather" Mar 16 18:54:00 crc kubenswrapper[4736]: E0316 18:54:00.150840 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="copy" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.150846 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="copy" Mar 16 18:54:00 crc kubenswrapper[4736]: E0316 18:54:00.150864 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92919fbd-33ef-48b8-8c77-0a950edc6a88" containerName="oc" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.150869 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="92919fbd-33ef-48b8-8c77-0a950edc6a88" containerName="oc" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.151022 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="92919fbd-33ef-48b8-8c77-0a950edc6a88" containerName="oc" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.151038 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="gather" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.151058 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b5e0ca0-0401-4c9f-a734-40c98a75bd3a" containerName="copy" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.151713 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.157074 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.157268 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.157612 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.174950 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561454-lcc4w"] Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.314024 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nch\" (UniqueName: \"kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch\") pod \"auto-csr-approver-29561454-lcc4w\" (UID: \"38ad55a7-1563-4559-a5c1-6ca40fbae27e\") " pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.415875 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nch\" (UniqueName: \"kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch\") pod \"auto-csr-approver-29561454-lcc4w\" (UID: \"38ad55a7-1563-4559-a5c1-6ca40fbae27e\") " pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.440370 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nch\" (UniqueName: \"kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch\") pod \"auto-csr-approver-29561454-lcc4w\" (UID: \"38ad55a7-1563-4559-a5c1-6ca40fbae27e\") " 
pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:00 crc kubenswrapper[4736]: I0316 18:54:00.476440 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:01 crc kubenswrapper[4736]: I0316 18:54:01.020767 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561454-lcc4w"] Mar 16 18:54:01 crc kubenswrapper[4736]: I0316 18:54:01.617977 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" event={"ID":"38ad55a7-1563-4559-a5c1-6ca40fbae27e","Type":"ContainerStarted","Data":"b4992418c687f20d439f018b8f46559e768553fd6013a6f5d645c75ad572d56f"} Mar 16 18:54:03 crc kubenswrapper[4736]: I0316 18:54:03.639921 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" event={"ID":"38ad55a7-1563-4559-a5c1-6ca40fbae27e","Type":"ContainerStarted","Data":"7ceb62af32139f43fa6ccf35892a3f03af3ce6603cbbfe0517b2ab0e5ed410c2"} Mar 16 18:54:04 crc kubenswrapper[4736]: I0316 18:54:04.657331 4736 generic.go:334] "Generic (PLEG): container finished" podID="38ad55a7-1563-4559-a5c1-6ca40fbae27e" containerID="7ceb62af32139f43fa6ccf35892a3f03af3ce6603cbbfe0517b2ab0e5ed410c2" exitCode=0 Mar 16 18:54:04 crc kubenswrapper[4736]: I0316 18:54:04.657464 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" event={"ID":"38ad55a7-1563-4559-a5c1-6ca40fbae27e","Type":"ContainerDied","Data":"7ceb62af32139f43fa6ccf35892a3f03af3ce6603cbbfe0517b2ab0e5ed410c2"} Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.363629 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.548221 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nch\" (UniqueName: \"kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch\") pod \"38ad55a7-1563-4559-a5c1-6ca40fbae27e\" (UID: \"38ad55a7-1563-4559-a5c1-6ca40fbae27e\") " Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.558285 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch" (OuterVolumeSpecName: "kube-api-access-w8nch") pod "38ad55a7-1563-4559-a5c1-6ca40fbae27e" (UID: "38ad55a7-1563-4559-a5c1-6ca40fbae27e"). InnerVolumeSpecName "kube-api-access-w8nch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.652181 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nch\" (UniqueName: \"kubernetes.io/projected/38ad55a7-1563-4559-a5c1-6ca40fbae27e-kube-api-access-w8nch\") on node \"crc\" DevicePath \"\"" Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.735672 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" event={"ID":"38ad55a7-1563-4559-a5c1-6ca40fbae27e","Type":"ContainerDied","Data":"b4992418c687f20d439f018b8f46559e768553fd6013a6f5d645c75ad572d56f"} Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.735722 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4992418c687f20d439f018b8f46559e768553fd6013a6f5d645c75ad572d56f" Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.735790 4736 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561454-lcc4w" Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.766926 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561448-lhtbv"] Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.774457 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561448-lhtbv"] Mar 16 18:54:06 crc kubenswrapper[4736]: I0316 18:54:06.988171 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370bf431-b0ae-4e95-b520-a99c927bad31" path="/var/lib/kubelet/pods/370bf431-b0ae-4e95-b520-a99c927bad31/volumes" Mar 16 18:54:12 crc kubenswrapper[4736]: I0316 18:54:12.474719 4736 scope.go:117] "RemoveContainer" containerID="eeb9d26c73b19c81133dc36edf18e4b063040d3b3e4bfaafd7f6937cf96436a7" Mar 16 18:55:08 crc kubenswrapper[4736]: I0316 18:55:08.507685 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:55:08 crc kubenswrapper[4736]: I0316 18:55:08.508317 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:55:38 crc kubenswrapper[4736]: I0316 18:55:38.508374 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:55:38 crc kubenswrapper[4736]: I0316 18:55:38.508909 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.164713 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29561456-p6zhx"] Mar 16 18:56:00 crc 
kubenswrapper[4736]: E0316 18:56:00.165591 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ad55a7-1563-4559-a5c1-6ca40fbae27e" containerName="oc" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.165605 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ad55a7-1563-4559-a5c1-6ca40fbae27e" containerName="oc" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.165800 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ad55a7-1563-4559-a5c1-6ca40fbae27e" containerName="oc" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.167510 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.172319 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561456-p6zhx"] Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.193617 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.193885 4736 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-44ztx" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.197020 4736 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.218180 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4lxk\" (UniqueName: \"kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk\") pod \"auto-csr-approver-29561456-p6zhx\" (UID: \"edbd559c-ad70-4b3b-bde8-6ffd50e0644d\") " pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.320415 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4lxk\" (UniqueName: \"kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk\") pod \"auto-csr-approver-29561456-p6zhx\" (UID: \"edbd559c-ad70-4b3b-bde8-6ffd50e0644d\") " pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.342832 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4lxk\" (UniqueName: \"kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk\") pod \"auto-csr-approver-29561456-p6zhx\" (UID: \"edbd559c-ad70-4b3b-bde8-6ffd50e0644d\") " pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:00 crc kubenswrapper[4736]: I0316 18:56:00.520145 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:03 crc kubenswrapper[4736]: I0316 18:56:03.165482 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29561456-p6zhx"] Mar 16 18:56:03 crc kubenswrapper[4736]: I0316 18:56:03.193079 4736 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 16 18:56:03 crc kubenswrapper[4736]: W0316 18:56:03.195621 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedbd559c_ad70_4b3b_bde8_6ffd50e0644d.slice/crio-bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f WatchSource:0}: Error finding container bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f: Status 404 returned error can't find the container with id bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f Mar 16 18:56:03 crc kubenswrapper[4736]: I0316 18:56:03.713861 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" event={"ID":"edbd559c-ad70-4b3b-bde8-6ffd50e0644d","Type":"ContainerStarted","Data":"bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f"} Mar 16 18:56:05 crc kubenswrapper[4736]: I0316 18:56:05.754009 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" event={"ID":"edbd559c-ad70-4b3b-bde8-6ffd50e0644d","Type":"ContainerStarted","Data":"be9ffc6c0715d0845debc4f45bfa3483cb10820763a6d2996e7ac7f2039e9577"} Mar 16 18:56:05 crc kubenswrapper[4736]: I0316 18:56:05.790559 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" podStartSLOduration=4.769715532 podStartE2EDuration="5.789944823s" podCreationTimestamp="2026-03-16 18:56:00 +0000 UTC" firstStartedPulling="2026-03-16 18:56:03.192672152 +0000 UTC m=+13364.920062439" lastFinishedPulling="2026-03-16 18:56:04.212901413 +0000 UTC m=+13365.940291730" observedRunningTime="2026-03-16 18:56:05.777444653 +0000 UTC m=+13367.504834950" watchObservedRunningTime="2026-03-16 18:56:05.789944823 +0000 UTC m=+13367.517335130" Mar 16 18:56:06 crc kubenswrapper[4736]: I0316 18:56:06.765315 4736 generic.go:334] "Generic (PLEG): container finished" podID="edbd559c-ad70-4b3b-bde8-6ffd50e0644d" containerID="be9ffc6c0715d0845debc4f45bfa3483cb10820763a6d2996e7ac7f2039e9577" exitCode=0 Mar 16 18:56:06 crc kubenswrapper[4736]: I0316 18:56:06.765363 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" event={"ID":"edbd559c-ad70-4b3b-bde8-6ffd50e0644d","Type":"ContainerDied","Data":"be9ffc6c0715d0845debc4f45bfa3483cb10820763a6d2996e7ac7f2039e9577"} Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.202944 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.375931 4736 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4lxk\" (UniqueName: \"kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk\") pod \"edbd559c-ad70-4b3b-bde8-6ffd50e0644d\" (UID: \"edbd559c-ad70-4b3b-bde8-6ffd50e0644d\") " Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.385330 4736 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk" (OuterVolumeSpecName: "kube-api-access-m4lxk") pod "edbd559c-ad70-4b3b-bde8-6ffd50e0644d" (UID: "edbd559c-ad70-4b3b-bde8-6ffd50e0644d"). InnerVolumeSpecName "kube-api-access-m4lxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.479493 4736 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4lxk\" (UniqueName: \"kubernetes.io/projected/edbd559c-ad70-4b3b-bde8-6ffd50e0644d-kube-api-access-m4lxk\") on node \"crc\" DevicePath \"\"" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.507886 4736 patch_prober.go:28] interesting pod/machine-config-daemon-j9cg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.507978 4736 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.508019 4736 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.508811 4736 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e"} pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.508882 4736 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" containerName="machine-config-daemon" containerID="cri-o://3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" gracePeriod=600 Mar 16 18:56:08 crc kubenswrapper[4736]: E0316 18:56:08.645558 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.788306 4736 generic.go:334] 
"Generic (PLEG): container finished" podID="45c93e24-5358-402f-9ace-e85478dedb49" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" exitCode=0 Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.788398 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" event={"ID":"45c93e24-5358-402f-9ace-e85478dedb49","Type":"ContainerDied","Data":"3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e"} Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.788438 4736 scope.go:117] "RemoveContainer" containerID="2ac85b8e3e10e660e194a587abee4c989c98c647973f5f4808a4029a7482bb58" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.789018 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:56:08 crc kubenswrapper[4736]: E0316 18:56:08.789364 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.792422 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" event={"ID":"edbd559c-ad70-4b3b-bde8-6ffd50e0644d","Type":"ContainerDied","Data":"bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f"} Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.792483 4736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfeddf2e1185ac9af3b4999dfb072b63b5fa2f57160497de4adfcde3c5fd904f" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.792572 4736 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29561456-p6zhx" Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.865724 4736 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29561450-dc8sv"] Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.875468 4736 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29561450-dc8sv"] Mar 16 18:56:08 crc kubenswrapper[4736]: I0316 18:56:08.991193 4736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f91286-06b2-4d02-a397-ff977f824452" path="/var/lib/kubelet/pods/e5f91286-06b2-4d02-a397-ff977f824452/volumes" Mar 16 18:56:12 crc kubenswrapper[4736]: I0316 18:56:12.639225 4736 scope.go:117] "RemoveContainer" containerID="3faf501f17b24fc6ea7b7f36fbdba81acf6a742dd4e810f3e9aeb38550cb32b8" Mar 16 18:56:21 crc kubenswrapper[4736]: I0316 18:56:21.977517 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:56:21 crc kubenswrapper[4736]: E0316 18:56:21.978545 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:56:36 crc kubenswrapper[4736]: I0316 18:56:36.981980 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:56:36 crc kubenswrapper[4736]: E0316 18:56:36.982858 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:56:49 crc kubenswrapper[4736]: I0316 18:56:49.979127 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:56:49 crc kubenswrapper[4736]: E0316 18:56:49.980084 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:57:02 crc kubenswrapper[4736]: I0316 18:57:02.978295 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:57:02 crc kubenswrapper[4736]: E0316 18:57:02.979255 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 
18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.054763 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2gg6g"] Mar 16 18:57:12 crc kubenswrapper[4736]: E0316 18:57:12.055761 4736 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edbd559c-ad70-4b3b-bde8-6ffd50e0644d" containerName="oc" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.055776 4736 state_mem.go:107] "Deleted CPUSet assignment" podUID="edbd559c-ad70-4b3b-bde8-6ffd50e0644d" containerName="oc" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.056034 4736 memory_manager.go:354] "RemoveStaleState removing state" podUID="edbd559c-ad70-4b3b-bde8-6ffd50e0644d" containerName="oc" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.059986 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.068856 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gg6g"] Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.130775 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj2cm\" (UniqueName: \"kubernetes.io/projected/1ebc3767-dd73-42ca-a96e-32746e7d4396-kube-api-access-bj2cm\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.130879 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-catalog-content\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.130962 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-utilities\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.233344 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj2cm\" (UniqueName: \"kubernetes.io/projected/1ebc3767-dd73-42ca-a96e-32746e7d4396-kube-api-access-bj2cm\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.233446 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-catalog-content\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.233521 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-utilities\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: 
I0316 18:57:12.234513 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-catalog-content\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.234574 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebc3767-dd73-42ca-a96e-32746e7d4396-utilities\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.236794 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qqql9"] Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.239361 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.268061 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj2cm\" (UniqueName: \"kubernetes.io/projected/1ebc3767-dd73-42ca-a96e-32746e7d4396-kube-api-access-bj2cm\") pod \"redhat-marketplace-2gg6g\" (UID: \"1ebc3767-dd73-42ca-a96e-32746e7d4396\") " pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.277407 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqql9"] Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.337543 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-utilities\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.337622 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-catalog-content\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.337735 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5lw\" (UniqueName: \"kubernetes.io/projected/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-kube-api-access-kt5lw\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.435778 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gg6g" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.442237 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-utilities\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.442289 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-catalog-content\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.442383 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5lw\" (UniqueName: \"kubernetes.io/projected/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-kube-api-access-kt5lw\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.443361 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-utilities\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.443437 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-catalog-content\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.465871 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5lw\" (UniqueName: \"kubernetes.io/projected/6e0e7675-5e8b-4a8b-b347-60dfe0e19722-kube-api-access-kt5lw\") pod \"redhat-operators-qqql9\" (UID: \"6e0e7675-5e8b-4a8b-b347-60dfe0e19722\") " pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:12 crc kubenswrapper[4736]: I0316 18:57:12.560329 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qqql9" Mar 16 18:57:13 crc kubenswrapper[4736]: W0316 18:57:12.924355 4736 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ebc3767_dd73_42ca_a96e_32746e7d4396.slice/crio-d9a8605da50d742601be4066c5a5b92d1ea80920704702240f8656c6a2ac709c WatchSource:0}: Error finding container d9a8605da50d742601be4066c5a5b92d1ea80920704702240f8656c6a2ac709c: Status 404 returned error can't find the container with id d9a8605da50d742601be4066c5a5b92d1ea80920704702240f8656c6a2ac709c Mar 16 18:57:13 crc kubenswrapper[4736]: I0316 18:57:12.937859 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gg6g"] Mar 16 18:57:13 crc kubenswrapper[4736]: I0316 18:57:13.574920 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ebc3767-dd73-42ca-a96e-32746e7d4396" containerID="d10e314a899f063dcef2eabd80d3ef20d5311515564050fec8fbeb84fecea9e5" exitCode=0 Mar 16 18:57:13 crc kubenswrapper[4736]: I0316 18:57:13.574978 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gg6g" event={"ID":"1ebc3767-dd73-42ca-a96e-32746e7d4396","Type":"ContainerDied","Data":"d10e314a899f063dcef2eabd80d3ef20d5311515564050fec8fbeb84fecea9e5"} Mar 16 18:57:13 crc kubenswrapper[4736]: I0316 18:57:13.575307 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gg6g" event={"ID":"1ebc3767-dd73-42ca-a96e-32746e7d4396","Type":"ContainerStarted","Data":"d9a8605da50d742601be4066c5a5b92d1ea80920704702240f8656c6a2ac709c"} Mar 16 18:57:13 crc kubenswrapper[4736]: I0316 18:57:13.950228 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qqql9"] Mar 16 18:57:14 crc kubenswrapper[4736]: I0316 18:57:14.586395 4736 generic.go:334] "Generic (PLEG): container finished" podID="6e0e7675-5e8b-4a8b-b347-60dfe0e19722" containerID="69d1a0ffb4fa8d86418b2cb8d12e5849324dcde707f43780e328cb2b4312bfa8" exitCode=0 Mar 16 18:57:14 crc kubenswrapper[4736]: I0316 18:57:14.586486 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqql9" event={"ID":"6e0e7675-5e8b-4a8b-b347-60dfe0e19722","Type":"ContainerDied","Data":"69d1a0ffb4fa8d86418b2cb8d12e5849324dcde707f43780e328cb2b4312bfa8"} Mar 16 18:57:14 crc kubenswrapper[4736]: I0316 18:57:14.587058 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqql9" event={"ID":"6e0e7675-5e8b-4a8b-b347-60dfe0e19722","Type":"ContainerStarted","Data":"f6c7f2ab1ea1f46cafcbe45c1f33ef2526057873f3fb317495f140044755f05a"} Mar 16 18:57:14 crc kubenswrapper[4736]: I0316 18:57:14.590248 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gg6g" event={"ID":"1ebc3767-dd73-42ca-a96e-32746e7d4396","Type":"ContainerStarted","Data":"184f8ee2ddce2bd645f04d80d4074d121671e1f230042546baa35430ab3d6265"} Mar 16 18:57:14 crc kubenswrapper[4736]: I0316 18:57:14.983392 4736 scope.go:117] "RemoveContainer" containerID="3de67c687e3ab45fc7c2310794b9ec1f30ac36217d2747430d53c2457751358e" Mar 16 18:57:14 crc kubenswrapper[4736]: E0316 18:57:14.983679 4736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-j9cg2_openshift-machine-config-operator(45c93e24-5358-402f-9ace-e85478dedb49)\"" pod="openshift-machine-config-operator/machine-config-daemon-j9cg2" podUID="45c93e24-5358-402f-9ace-e85478dedb49" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.640407 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfqz5"] Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.642944 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.680564 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfqz5"] Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.745675 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-utilities\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.745792 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpgd8\" (UniqueName: \"kubernetes.io/projected/6b3a013c-2c51-4049-bba9-edb93275148c-kube-api-access-rpgd8\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.745837 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-catalog-content\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.847357 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-catalog-content\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.847742 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-utilities\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.847913 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpgd8\" (UniqueName: \"kubernetes.io/projected/6b3a013c-2c51-4049-bba9-edb93275148c-kube-api-access-rpgd8\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.848053 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-catalog-content\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " 
pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.848236 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b3a013c-2c51-4049-bba9-edb93275148c-utilities\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:15 crc kubenswrapper[4736]: I0316 18:57:15.871981 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpgd8\" (UniqueName: \"kubernetes.io/projected/6b3a013c-2c51-4049-bba9-edb93275148c-kube-api-access-rpgd8\") pod \"certified-operators-wfqz5\" (UID: \"6b3a013c-2c51-4049-bba9-edb93275148c\") " pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.039611 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfqz5" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.261928 4736 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-59wmx"] Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.267323 4736 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.273905 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59wmx"] Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.459399 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46x4n\" (UniqueName: \"kubernetes.io/projected/77f6a395-196b-46e1-8315-6a222a696118-kube-api-access-46x4n\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.459513 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-utilities\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.459669 4736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-catalog-content\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.561158 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-utilities\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.561584 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-catalog-content\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " 
pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.561632 4736 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46x4n\" (UniqueName: \"kubernetes.io/projected/77f6a395-196b-46e1-8315-6a222a696118-kube-api-access-46x4n\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.562331 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-catalog-content\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.561774 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f6a395-196b-46e1-8315-6a222a696118-utilities\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.589742 4736 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46x4n\" (UniqueName: \"kubernetes.io/projected/77f6a395-196b-46e1-8315-6a222a696118-kube-api-access-46x4n\") pod \"community-operators-59wmx\" (UID: \"77f6a395-196b-46e1-8315-6a222a696118\") " pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.612613 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qqql9" event={"ID":"6e0e7675-5e8b-4a8b-b347-60dfe0e19722","Type":"ContainerStarted","Data":"8f9ddf1b06fb74fd52935a10cbe3d4612e29bfda022c875776b72228b57a5b61"} Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.618662 4736 generic.go:334] "Generic (PLEG): container finished" podID="1ebc3767-dd73-42ca-a96e-32746e7d4396" containerID="184f8ee2ddce2bd645f04d80d4074d121671e1f230042546baa35430ab3d6265" exitCode=0 Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.618714 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gg6g" event={"ID":"1ebc3767-dd73-42ca-a96e-32746e7d4396","Type":"ContainerDied","Data":"184f8ee2ddce2bd645f04d80d4074d121671e1f230042546baa35430ab3d6265"} Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.648849 4736 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-59wmx" Mar 16 18:57:16 crc kubenswrapper[4736]: I0316 18:57:16.766909 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfqz5"] Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.628968 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gg6g" event={"ID":"1ebc3767-dd73-42ca-a96e-32746e7d4396","Type":"ContainerStarted","Data":"876361a6ebed48a1a0fb6f8514fbab79a291fe3e4872a98bc37f919fdc671adf"} Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.630505 4736 generic.go:334] "Generic (PLEG): container finished" podID="6b3a013c-2c51-4049-bba9-edb93275148c" containerID="7cfd2e51f6d4400001c43e79ce7a91c6f3f1ebb56c3b083bed0a046b73b304f8" exitCode=0 Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.630697 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfqz5" event={"ID":"6b3a013c-2c51-4049-bba9-edb93275148c","Type":"ContainerDied","Data":"7cfd2e51f6d4400001c43e79ce7a91c6f3f1ebb56c3b083bed0a046b73b304f8"} Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.630731 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfqz5" event={"ID":"6b3a013c-2c51-4049-bba9-edb93275148c","Type":"ContainerStarted","Data":"f752d3f3c22ee89aa876ff85fc69a560dbc8a4cff8b8637b70378a7a474b4cca"} Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.665165 4736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2gg6g" podStartSLOduration=1.968152812 podStartE2EDuration="5.665148112s" podCreationTimestamp="2026-03-16 18:57:12 +0000 UTC" firstStartedPulling="2026-03-16 18:57:13.577320019 +0000 UTC m=+13435.304710306" lastFinishedPulling="2026-03-16 18:57:17.274315329 +0000 UTC m=+13439.001705606" observedRunningTime="2026-03-16 18:57:17.654851231 +0000 UTC m=+13439.382241518" watchObservedRunningTime="2026-03-16 18:57:17.665148112 +0000 UTC m=+13439.392538389" Mar 16 18:57:17 crc kubenswrapper[4736]: I0316 18:57:17.713871 4736 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-59wmx"] Mar 16 18:57:18 crc kubenswrapper[4736]: I0316 18:57:18.641843 4736 generic.go:334] "Generic (PLEG): container finished" podID="77f6a395-196b-46e1-8315-6a222a696118" containerID="de0f2ab0c0326175446d86f4d33379c69e410694b5e65ce1a182c385b5c36976" exitCode=0 Mar 16 18:57:18 crc kubenswrapper[4736]: I0316 18:57:18.641897 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59wmx" event={"ID":"77f6a395-196b-46e1-8315-6a222a696118","Type":"ContainerDied","Data":"de0f2ab0c0326175446d86f4d33379c69e410694b5e65ce1a182c385b5c36976"} Mar 16 18:57:18 crc kubenswrapper[4736]: I0316 18:57:18.642209 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-59wmx" event={"ID":"77f6a395-196b-46e1-8315-6a222a696118","Type":"ContainerStarted","Data":"e900e1b2c54e4c183d757e79bb5af73402657ddd0d18c29f3723d4928d574697"} Mar 16 18:57:18 crc kubenswrapper[4736]: I0316 18:57:18.644860 4736 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfqz5" event={"ID":"6b3a013c-2c51-4049-bba9-edb93275148c","Type":"ContainerStarted","Data":"03e6e8605aa3eec52ae9f30f82ad19565cf38c61cce93897b67dbfd04000f75c"} 